scons: Reading SConscript files ...
scons version: 2.1.0.alpha.20101125
python version: 2 5 1 'final' 0
Checking whether the C++ compiler works... (cached) yes
Checking for C header file unistd.h... (cached) yes
Checking whether clock_gettime is declared... (cached) yes
Checking for C library rt... (cached) yes
Checking for C++ header file execinfo.h... (cached) yes
Checking whether backtrace is declared... (cached) yes
Checking whether backtrace_symbols is declared... (cached) yes
Checking for C library pcap... (cached) yes
scons: done reading SConscript files.
scons: Building targets ...
generate_buildinfo(["build/buildinfo.cpp"], ['\n#include <string>\n#include <boost/version.hpp>\n\n#include "mongo/util/version.h"\n\nnamespace mongo {\n const char * gitVersion() { return "%(git_version)s"; }\n std::string sysInfo() { return "%(sys_info)s BOOST_LIB_VERSION=" BOOST_LIB_VERSION ; }\n} // namespace mongo\n'])
/usr/bin/python /mnt/slaves/Linux_32bit/mongo/buildscripts/smoke.py mongosTest
cwd [/mnt/slaves/Linux_32bit/mongo]
num procs:48
removing: /data/db/sconsTests//mongod.lock
Thu Jun 14 01:21:26
Thu Jun 14 01:21:26 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Thu Jun 14 01:21:26
Thu Jun 14 01:21:26 [initandlisten] MongoDB starting : pid=20446 port=27999 dbpath=/data/db/sconsTests/ 32-bit host=domU-12-31-39-01-70-B4
Thu Jun 14 01:21:26 [initandlisten]
Thu Jun 14 01:21:26 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
Thu Jun 14 01:21:26 [initandlisten] ** Not recommended for production.
Thu Jun 14 01:21:26 [initandlisten]
Thu Jun 14 01:21:26 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Thu Jun 14 01:21:26 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Thu Jun 14 01:21:26 [initandlisten] ** with --journal, the limit is lower
Thu Jun 14 01:21:26 [initandlisten]
Thu Jun 14 01:21:26 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
Thu Jun 14 01:21:26 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
Thu Jun 14 01:21:26 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
Thu Jun 14 01:21:26 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999 }
Thu Jun 14 01:21:26 [initandlisten] waiting for connections on port 27999
Thu Jun 14 01:21:26 [websvr] admin web console waiting for connections on port 28999
Thu Jun 14 01:21:27 [initandlisten] connection accepted from 127.0.0.1:42052 #1 (1 connection now open)
Thu Jun 14 01:21:27 [conn1] end connection 127.0.0.1:42052 (0 connections now open)
running /mnt/slaves/Linux_32bit/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/
*******************************************
Test : mongos ...
Command : /mnt/slaves/Linux_32bit/mongo/mongos --test
Date : Thu Jun 14 01:21:27 2012
Thu Jun 14 01:21:27 versionCmpTest passed
Thu Jun 14 01:21:27 versionArrayTest passed
Thu Jun 14 01:21:27 shardObjTest passed
Thu Jun 14 01:21:27 _inBalancingWindow: now: 2012-Jun-14 13:48:00 startTime: 2012-Jun-14 09:00:00 stopTime: 2012-Jun-14 11:00:00
Thu Jun 14 01:21:27 _inBalancingWindow: now: 2012-Jun-14 13:48:00 startTime: 2012-Jun-14 17:00:00 stopTime: 2012-Jun-14 21:30:00
Thu Jun 14 01:21:27 _inBalancingWindow: now: 2012-Jun-14 13:48:00 startTime: 2012-Jun-14 11:00:00 stopTime: 2012-Jun-14 17:00:00
Thu Jun 14 01:21:27 _inBalancingWindow: now: 2012-Jun-14 13:48:00 startTime: 2012-Jun-14 21:30:00 stopTime: 2012-Jun-14 17:00:00
Thu Jun 14 01:21:27 warning: must specify both start and end of balancing window: { start: 1 }
Thu Jun 14 01:21:27 warning: must specify both start and end of balancing window: { stop: 1 }
Thu Jun 14 01:21:27 warning: cannot parse active window (use hh:mm 24hs format): { start: "21:30", stop: "28:35" }
Thu Jun 14 01:21:27 BalancingWidowObjTest passed
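For context on the _inBalancingWindow checks and the "must specify both start and end" / "use hh:mm 24hs format" warnings above: they exercise the balancer's activeWindow setting, which lives in the config database's settings collection. A minimal shell sketch of setting such a window (the times below are illustrative, not taken from this run):

    // connected through a mongos: restrict balancing to a nightly window
    var conf = db.getSiblingDB("config");
    conf.settings.update(
        { _id: "balancer" },
        { $set: { activeWindow: { start: "23:00", stop: "06:00" } } },
        true  // upsert so the settings document is created if missing
    );
    // a window like { start: "21:30", stop: "28:35" } is rejected, as logged above,
    // because "28:35" is not a valid hh:mm 24-hour time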
Thu Jun 14 01:21:27 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Thu Jun 14 01:21:27 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Thu Jun 14 01:21:27 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Thu Jun 14 01:21:27 Matcher::matches() { abcdef: "z23456789" }
Thu Jun 14 01:21:27 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Thu Jun 14 01:21:27 Matcher::matches() { abcdef: "z23456789" }
Thu Jun 14 01:21:27 shardKeyTest passed
tests passed
11.477947ms
Thu Jun 14 01:21:27 [initandlisten] connection accepted from 127.0.0.1:42053 #2 (1 connection now open)
Thu Jun 14 01:21:27 got signal 15 (Terminated), will terminate after current cmd ends
Thu Jun 14 01:21:27 [interruptThread] now exiting
Thu Jun 14 01:21:27 dbexit:
Thu Jun 14 01:21:27 [interruptThread] shutdown: going to close listening sockets...
Thu Jun 14 01:21:27 [interruptThread] closing listening socket: 5
Thu Jun 14 01:21:27 [interruptThread] closing listening socket: 6
Thu Jun 14 01:21:27 [interruptThread] closing listening socket: 7
Thu Jun 14 01:21:27 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Thu Jun 14 01:21:27 [interruptThread] shutdown: going to flush diaglog...
Thu Jun 14 01:21:27 [interruptThread] shutdown: going to close sockets...
Thu Jun 14 01:21:27 [interruptThread] shutdown: waiting for fs preallocator...
Thu Jun 14 01:21:27 [interruptThread] shutdown: closing all files...
Thu Jun 14 01:21:27 [interruptThread] closeAllFiles() finished
Thu Jun 14 01:21:27 [interruptThread] shutdown: removing fs lock...
Thu Jun 14 01:21:27 dbexit: really exiting now
1 tests succeeded
/usr/bin/python /mnt/slaves/Linux_32bit/mongo/buildscripts/smoke.py sharding
cwd [/mnt/slaves/Linux_32bit/mongo]
num procs:48
removing: /data/db/sconsTests//mongod.lock
Thu Jun 14 01:22:16
Thu Jun 14 01:22:16 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Thu Jun 14 01:22:16
Thu Jun 14 01:22:16 [initandlisten] MongoDB starting : pid=20470 port=27999 dbpath=/data/db/sconsTests/ 32-bit host=domU-12-31-39-01-70-B4
Thu Jun 14 01:22:16 [initandlisten]
Thu Jun 14 01:22:16 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
Thu Jun 14 01:22:16 [initandlisten] ** Not recommended for production.
Thu Jun 14 01:22:16 [initandlisten]
Thu Jun 14 01:22:16 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Thu Jun 14 01:22:16 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Thu Jun 14 01:22:16 [initandlisten] ** with --journal, the limit is lower
Thu Jun 14 01:22:16 [initandlisten]
Thu Jun 14 01:22:16 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
Thu Jun 14 01:22:16 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
Thu Jun 14 01:22:16 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
Thu Jun 14 01:22:16 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999 }
Thu Jun 14 01:22:16 [initandlisten] waiting for connections on port 27999
Thu Jun 14 01:22:16 [websvr] admin web console waiting for connections on port 28999
Thu Jun 14 01:22:17 [initandlisten] connection accepted from 127.0.0.1:42054 #1 (1 connection now open)
running /mnt/slaves/Linux_32bit/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/
*******************************************
Test : addshard1.js ...
Thu Jun 14 01:22:17 [conn1] end connection 127.0.0.1:42054 (0 connections now open)
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard1.js";TestData.testFile = "addshard1.js";TestData.testName = "addshard1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:22:17 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/add_shard10'
Thu Jun 14 01:22:17 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/add_shard10
m30000| Thu Jun 14 01:22:17
m30000| Thu Jun 14 01:22:17 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:22:17
m30000| Thu Jun 14 01:22:17 [initandlisten] MongoDB starting : pid=20481 port=30000 dbpath=/data/db/add_shard10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:22:17 [initandlisten]
m30000| Thu Jun 14 01:22:17 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:22:17 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:22:17 [initandlisten]
m30000| Thu Jun 14 01:22:17 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:22:17 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:22:17 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:22:17 [initandlisten]
m30000| Thu Jun 14 01:22:17 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:22:17 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:22:17 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:22:17 [initandlisten] options: { dbpath: "/data/db/add_shard10", port: 30000 }
m30000| Thu Jun 14 01:22:17 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:22:17 [websvr] admin web console waiting for connections on port 31000
"localhost:30000"
m30000| Thu Jun 14 01:22:17 [initandlisten] connection accepted from 127.0.0.1:53641 #1 (1 connection now open)
ShardingTest add_shard1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000
]
}
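The block above is what the shell test harness prints when addshard1.js builds its cluster: one mongod on port 30000 serving as both config server and first shard, plus a mongos on port 30999 (started on the next line). A rough sketch of the harness call that produces such a topology, assuming the classic positional ShardingTest constructor of this era (the exact arguments in addshard1.js may differ):

    // one shard mongod (port 30000, also holding the config db) and one mongos (port 30999)
    var s = new ShardingTest("add_shard1", 1 /* shards */, 0 /* verbose */, 1 /* mongos */);
    var mongos = s.s;                    // connection to the mongos
    var admin  = mongos.getDB("admin");  // admin db used for the addShard calls below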
Thu Jun 14 01:22:17 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:22:17 [initandlisten] connection accepted from 127.0.0.1:53642 #2 (2 connections now open)
m30000| Thu Jun 14 01:22:17 [FileAllocator] allocating new datafile /data/db/add_shard10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:22:17 [FileAllocator] creating directory /data/db/add_shard10/_tmp
m30999| Thu Jun 14 01:22:17 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:22:17 [mongosMain] MongoS version 2.1.2-pre- starting: pid=20496 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:22:17 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:22:17 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:22:17 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:22:17 [initandlisten] connection accepted from 127.0.0.1:53644 #3 (3 connections now open)
m30000| Thu Jun 14 01:22:18 [FileAllocator] done allocating datafile /data/db/add_shard10/config.ns, size: 16MB, took 0.229 secs
m30000| Thu Jun 14 01:22:18 [FileAllocator] allocating new datafile /data/db/add_shard10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:22:18 [FileAllocator] done allocating datafile /data/db/add_shard10/config.0, size: 16MB, took 0.272 secs
m30000| Thu Jun 14 01:22:18 [FileAllocator] allocating new datafile /data/db/add_shard10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:22:18 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn2] insert config.settings keyUpdates:0 locks(micros) w:523441 523ms
m30000| Thu Jun 14 01:22:18 [initandlisten] connection accepted from 127.0.0.1:53647 #4 (4 connections now open)
m30000| Thu Jun 14 01:22:18 [initandlisten] connection accepted from 127.0.0.1:53648 #5 (5 connections now open)
m30000| Thu Jun 14 01:22:18 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:22:18 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:22:18 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:22:18 [Balancer] about to contact config servers and shards
m30000| Thu Jun 14 01:22:18 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:22:18 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:22:18 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:22:18 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:22:18 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:22:18
m30999| Thu Jun 14 01:22:18 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:22:18 [initandlisten] connection accepted from 127.0.0.1:53649 #6 (6 connections now open)
m30000| Thu Jun 14 01:22:18 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:22:18 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651338:1804289383' acquired, ts : 4fd9750a500d8a7b0a7e0d7a
m30999| Thu Jun 14 01:22:18 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651338:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:22:18 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:18 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:22:18 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:22:18 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651338:1804289383' unlocked.
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:22:18 [mongosMain] connection accepted from 127.0.0.1:52500 #1 (1 connection now open)
m30999| Thu Jun 14 01:22:18 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:22:18 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:22:18 [conn4] build index done. scanned 0 total records. 0.025 secs
m30999| Thu Jun 14 01:22:18 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:22:18 [FileAllocator] done allocating datafile /data/db/add_shard10/config.1, size: 32MB, took 0.587 secs
m30000| Thu Jun 14 01:22:18 [conn5] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:349 w:1443 reslen:177 470ms
m30999| Thu Jun 14 01:22:18 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
m30000| Thu Jun 14 01:22:18 [initandlisten] connection accepted from 127.0.0.1:53651 #7 (7 connections now open)
m30999| Thu Jun 14 01:22:18 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd9750a500d8a7b0a7e0d79
Thu Jun 14 01:22:18 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/29000 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m29000| note: noprealloc may hurt performance in many applications
m29000| Thu Jun 14 01:22:18
m29000| Thu Jun 14 01:22:18 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:22:18
m29000| Thu Jun 14 01:22:18 [initandlisten] MongoDB starting : pid=20520 port=29000 dbpath=/data/db/29000 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:22:18 [initandlisten]
m29000| Thu Jun 14 01:22:18 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:22:18 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:22:18 [initandlisten]
m29000| Thu Jun 14 01:22:18 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:22:18 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:22:18 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:22:18 [initandlisten]
m29000| Thu Jun 14 01:22:18 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:22:18 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:22:18 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:22:18 [initandlisten] options: { dbpath: "/data/db/29000", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 29000, smallfiles: true }
m29000| Thu Jun 14 01:22:18 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:22:19 [initandlisten] connection accepted from 127.0.0.1:46016 #1 (1 connection now open)
m29000| Thu Jun 14 01:22:19 [FileAllocator] allocating new datafile /data/db/29000/testDB.ns, filling with zeroes...
m29000| Thu Jun 14 01:22:19 [FileAllocator] creating directory /data/db/29000/_tmp
m29000| Thu Jun 14 01:22:19 [FileAllocator] done allocating datafile /data/db/29000/testDB.ns, size: 16MB, took 0.266 secs
m29000| Thu Jun 14 01:22:19 [FileAllocator] allocating new datafile /data/db/29000/testDB.0, filling with zeroes...
m29000| Thu Jun 14 01:22:19 [FileAllocator] done allocating datafile /data/db/29000/testDB.0, size: 16MB, took 0.323 secs
m29000| Thu Jun 14 01:22:19 [conn1] build index testDB.foo { _id: 1 }
m29000| Thu Jun 14 01:22:19 [conn1] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:22:19 [conn1] insert testDB.foo keyUpdates:0 locks(micros) w:603981 603ms
m29000| Thu Jun 14 01:22:19 [initandlisten] connection accepted from 127.0.0.1:46017 #2 (2 connections now open)
m30999| Thu Jun 14 01:22:19 [conn] going to add shard: { _id: "myShard", host: "localhost:29000" }
m30999| Thu Jun 14 01:22:19 [conn] couldn't find database [testDB] in config db
m30999| Thu Jun 14 01:22:19 [conn] put [testDB] on: myShard:localhost:29000
Thu Jun 14 01:22:19 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29001 --dbpath /data/db/29001 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m29001| note: noprealloc may hurt performance in many applications
m29001| Thu Jun 14 01:22:19
m29001| Thu Jun 14 01:22:19 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29001| Thu Jun 14 01:22:19
m29001| Thu Jun 14 01:22:19 [initandlisten] MongoDB starting : pid=20534 port=29001 dbpath=/data/db/29001 32-bit host=domU-12-31-39-01-70-B4
m29001| Thu Jun 14 01:22:19 [initandlisten]
m29001| Thu Jun 14 01:22:19 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29001| Thu Jun 14 01:22:19 [initandlisten] ** Not recommended for production.
m29001| Thu Jun 14 01:22:19 [initandlisten]
m29001| Thu Jun 14 01:22:19 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29001| Thu Jun 14 01:22:19 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29001| Thu Jun 14 01:22:19 [initandlisten] ** with --journal, the limit is lower
m29001| Thu Jun 14 01:22:19 [initandlisten]
m29001| Thu Jun 14 01:22:19 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29001| Thu Jun 14 01:22:19 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29001| Thu Jun 14 01:22:19 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29001| Thu Jun 14 01:22:19 [initandlisten] options: { dbpath: "/data/db/29001", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 29001, smallfiles: true }
m29001| Thu Jun 14 01:22:19 [initandlisten] waiting for connections on port 29001
m29001| Thu Jun 14 01:22:19 [initandlisten] connection accepted from 127.0.0.1:48066 #1 (1 connection now open)
m29001| Thu Jun 14 01:22:19 [FileAllocator] allocating new datafile /data/db/29001/otherDB.ns, filling with zeroes...
m29001| Thu Jun 14 01:22:19 [FileAllocator] creating directory /data/db/29001/_tmp
m29001| Thu Jun 14 01:22:20 [FileAllocator] done allocating datafile /data/db/29001/otherDB.ns, size: 16MB, took 0.247 secs
m29001| Thu Jun 14 01:22:20 [FileAllocator] allocating new datafile /data/db/29001/otherDB.0, filling with zeroes...
m29001| Thu Jun 14 01:22:20 [FileAllocator] done allocating datafile /data/db/29001/otherDB.0, size: 16MB, took 0.297 secs
m29001| Thu Jun 14 01:22:20 [conn1] build index otherDB.foo { _id: 1 }
m29001| Thu Jun 14 01:22:20 [conn1] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:22:20 [conn1] insert otherDB.foo keyUpdates:0 locks(micros) w:558375 558ms
m29001| Thu Jun 14 01:22:20 [FileAllocator] allocating new datafile /data/db/29001/testDB.ns, filling with zeroes...
m29001| Thu Jun 14 01:22:20 [FileAllocator] done allocating datafile /data/db/29001/testDB.ns, size: 16MB, took 0.286 secs
m29001| Thu Jun 14 01:22:20 [FileAllocator] allocating new datafile /data/db/29001/testDB.0, filling with zeroes...
m29001| Thu Jun 14 01:22:21 [FileAllocator] done allocating datafile /data/db/29001/testDB.0, size: 16MB, took 0.373 secs
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "testDB", "partitioned" : false, "primary" : "myShard" }
m29001| Thu Jun 14 01:22:21 [conn1] build index testDB.foo { _id: 1 }
m29001| Thu Jun 14 01:22:21 [conn1] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:22:21 [conn1] insert testDB.foo keyUpdates:0 locks(micros) w:1228606 670ms
m29001| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:48068 #2 (2 connections now open)
m30999| Thu Jun 14 01:22:21 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd9750a500d8a7b0a7e0d79
m30999| Thu Jun 14 01:22:21 [conn] addshard request { addshard: "localhost:29001", name: "rejectedShard" } failed: can't add shard localhost:29001 because a local database 'testDB' exists in another myShard:localhost:29000
m29000| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:46020 #3 (3 connections now open)
m30999| Thu Jun 14 01:22:21 [conn] couldn't find database [otherDB] in config db
m29000| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:46022 #4 (4 connections now open)
m30999| Thu Jun 14 01:22:21 [conn] put [otherDB] on: myShard:localhost:29000
m30999| Thu Jun 14 01:22:21 [conn] Moving testDB primary from: myShard:localhost:29000 to: shard0000:localhost:30000
m30999| Thu Jun 14 01:22:21 [conn] created new distributed lock for testDB-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:22:21 [conn] distributed lock 'testDB-movePrimary/domU-12-31-39-01-70-B4:30999:1339651338:1804289383' acquired, ts : 4fd9750d500d8a7b0a7e0d7b
m29000| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:46023 #5 (5 connections now open)
m30000| Thu Jun 14 01:22:21 [FileAllocator] allocating new datafile /data/db/add_shard10/testDB.ns, filling with zeroes...
m30000| Thu Jun 14 01:22:21 [FileAllocator] done allocating datafile /data/db/add_shard10/testDB.ns, size: 16MB, took 0.393 secs
m30000| Thu Jun 14 01:22:21 [FileAllocator] allocating new datafile /data/db/add_shard10/testDB.0, filling with zeroes...
m30000| Thu Jun 14 01:22:21 [FileAllocator] done allocating datafile /data/db/add_shard10/testDB.0, size: 16MB, took 0.288 secs
m30000| Thu Jun 14 01:22:21 [FileAllocator] allocating new datafile /data/db/add_shard10/testDB.1, filling with zeroes...
m30000| Thu Jun 14 01:22:21 [conn6] build index testDB.foo { _id: 1 }
m30000| Thu Jun 14 01:22:21 [conn6] fastBuildIndex dupsToDrop:0
m30000| Thu Jun 14 01:22:21 [conn6] build index done. scanned 3 total records. 0.001 secs
m30000| Thu Jun 14 01:22:21 [conn6] command testDB.$cmd command: { clone: "localhost:29000", collsToIgnore: {} } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) W:37 r:845 w:694593 reslen:73 694ms
m30999| Thu Jun 14 01:22:21 [conn] movePrimary dropping database on localhost:29000, no sharded collections in testDB
m29000| Thu Jun 14 01:22:21 [conn5] end connection 127.0.0.1:46023 (4 connections now open)
m29000| Thu Jun 14 01:22:21 [conn4] dropDatabase testDB
m30999| Thu Jun 14 01:22:21 [conn] distributed lock 'testDB-movePrimary/domU-12-31-39-01-70-B4:30999:1339651338:1804289383' unlocked.
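The lines above show the mongos moving testDB's primary from myShard to shard0000 under a distributed lock: cloning the data to the new primary and then dropping it on the old one. The user-facing command behind that sequence is movePrimary, run against the admin database on the mongos (a sketch; the shard and database names are taken from this log):

    // move testDB's primary shard from myShard (localhost:29000) to shard0000 (localhost:30000)
    db.getSiblingDB("admin").runCommand({ movePrimary: "testDB", to: "shard0000" });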
m30999| Thu Jun 14 01:22:21 [conn] enabling sharding on: testDB
m30999| Thu Jun 14 01:22:21 [conn] CMD: shardcollection: { shardcollection: "testDB.foo", key: { a: 1.0 } }
m30999| Thu Jun 14 01:22:21 [conn] enable sharding on: testDB.foo with shard key: { a: 1.0 }
m30999| Thu Jun 14 01:22:21 [conn] going to create 1 chunk(s) for: testDB.foo using new epoch 4fd9750d500d8a7b0a7e0d7c
m30999| Thu Jun 14 01:22:21 [conn] ChunkManager: time to load chunks for testDB.foo: 0ms sequenceNumber: 2 version: 1|0||4fd9750d500d8a7b0a7e0d7c based on: (empty)
m30999| Thu Jun 14 01:22:21 [conn] splitting: testDB.foo shard: ns:testDB.foo at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey }
m30000| Thu Jun 14 01:22:21 [conn7] build index testDB.foo { a: 1.0 }
m30000| Thu Jun 14 01:22:21 [conn7] build index done. scanned 3 total records. 0 secs
m30000| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:53661 #8 (8 connections now open)
m30000| Thu Jun 14 01:22:21 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:22:21 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:21 [conn7] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:53662 #9 (9 connections now open)
m30000| Thu Jun 14 01:22:21 [conn6] received splitChunk request: { splitChunk: "testDB.foo", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, from: "shard0000", splitKeys: [ { a: 1.0 } ], shardId: "testDB.foo-a_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:22:21 [conn6] created new distributed lock for testDB.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:22:21 [conn6] distributed lock 'testDB.foo/domU-12-31-39-01-70-B4:30000:1339651341:363714672' acquired, ts : 4fd9750d2b736c64a06da9ac
m30000| Thu Jun 14 01:22:21 [conn6] splitChunk accepted at version 1|0||4fd9750d500d8a7b0a7e0d7c
m30000| Thu Jun 14 01:22:21 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:22:21-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:53649", time: new Date(1339651341904), what: "split", ns: "testDB.foo", details: { before: { min: { a: MinKey }, max: { a: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9750d500d8a7b0a7e0d7c') }, right: { min: { a: 1.0 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9750d500d8a7b0a7e0d7c') } } }
m30000| Thu Jun 14 01:22:21 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339651341:363714672 (sleeping for 30000ms)
m30000| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:53663 #10 (10 connections now open)
m30000| Thu Jun 14 01:22:21 [conn6] distributed lock 'testDB.foo/domU-12-31-39-01-70-B4:30000:1339651341:363714672' unlocked.
m30000| Thu Jun 14 01:22:21 [initandlisten] connection accepted from 127.0.0.1:53664 #11 (11 connections now open)
m30999| Thu Jun 14 01:22:21 [conn] ChunkManager: time to load chunks for testDB.foo: 30ms sequenceNumber: 3 version: 1|2||4fd9750d500d8a7b0a7e0d7c based on: 1|0||4fd9750d500d8a7b0a7e0d7c
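The "enabling sharding" / "CMD: shardcollection" / splitChunk lines above correspond to three admin commands issued through the mongos; the splitChunk request logged by shard0000 is the server-side half of a user-level split. A sketch using the namespace and shard key from this log:

    var admin = db.getSiblingDB("admin");  // via the mongos on port 30999
    admin.runCommand({ enablesharding: "testDB" });
    admin.runCommand({ shardcollection: "testDB.foo", key: { a: 1 } });
    // the splitChunk request seen on shard0000 is what this turns into internally,
    // splitting the single initial chunk at { a: 1 }
    admin.runCommand({ split: "testDB.foo", middle: { a: 1 } });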
m29000| Thu Jun 14 01:22:21 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:22:21 [interruptThread] now exiting
m29000| Thu Jun 14 01:22:21 dbexit:
m29000| Thu Jun 14 01:22:21 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:22:21 [interruptThread] closing listening socket: 16
m29000| Thu Jun 14 01:22:21 [interruptThread] closing listening socket: 17
m29000| Thu Jun 14 01:22:21 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:22:21 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:22:21 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:22:21 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:22:21 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:22:21 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:22:21 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:22:21 dbexit: really exiting now
m30999| Thu Jun 14 01:22:21 [WriteBackListener-localhost:29000] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:22:21 [WriteBackListener-localhost:29000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:29000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd9750a500d8a7b0a7e0d79') }
m30000| Thu Jun 14 01:22:22 [FileAllocator] done allocating datafile /data/db/add_shard10/testDB.1, size: 32MB, took 0.688 secs
Thu Jun 14 01:22:22 shell: stopped mongo program on port 29000
m29001| Thu Jun 14 01:22:22 got signal 15 (Terminated), will terminate after current cmd ends
m29001| Thu Jun 14 01:22:22 [interruptThread] now exiting
m29001| Thu Jun 14 01:22:22 dbexit:
m29001| Thu Jun 14 01:22:22 [interruptThread] shutdown: going to close listening sockets...
m29001| Thu Jun 14 01:22:22 [interruptThread] closing listening socket: 19
m29001| Thu Jun 14 01:22:22 [interruptThread] closing listening socket: 20
m29001| Thu Jun 14 01:22:22 [interruptThread] removing socket file: /tmp/mongodb-29001.sock
m29001| Thu Jun 14 01:22:22 [interruptThread] shutdown: going to flush diaglog...
m29001| Thu Jun 14 01:22:22 [interruptThread] shutdown: going to close sockets...
m29001| Thu Jun 14 01:22:22 [interruptThread] shutdown: waiting for fs preallocator...
m29001| Thu Jun 14 01:22:22 [interruptThread] shutdown: closing all files...
m30999| Thu Jun 14 01:22:22 [WriteBackListener-localhost:29000] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:22:22 [WriteBackListener-localhost:29000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:29000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd9750a500d8a7b0a7e0d79') }
m29001| Thu Jun 14 01:22:22 [interruptThread] closeAllFiles() finished
m29001| Thu Jun 14 01:22:22 [interruptThread] shutdown: removing fs lock...
m29001| Thu Jun 14 01:22:22 dbexit: really exiting now
Thu Jun 14 01:22:23 shell: stopped mongo program on port 29001
m30999| Thu Jun 14 01:22:23 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:22:23 [conn4] end connection 127.0.0.1:53647 (10 connections now open)
m30000| Thu Jun 14 01:22:23 [conn7] end connection 127.0.0.1:53651 (9 connections now open)
m30000| Thu Jun 14 01:22:23 [conn6] end connection 127.0.0.1:53649 (9 connections now open)
m30000| Thu Jun 14 01:22:23 [conn3] end connection 127.0.0.1:53644 (7 connections now open)
m30000| Thu Jun 14 01:22:23 [conn8] end connection 127.0.0.1:53661 (7 connections now open)
Thu Jun 14 01:22:24 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:22:24 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:22:24 [interruptThread] now exiting
m30000| Thu Jun 14 01:22:24 dbexit:
m30000| Thu Jun 14 01:22:24 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:22:24 [interruptThread] closing listening socket: 9
m30000| Thu Jun 14 01:22:24 [interruptThread] closing listening socket: 10
m30000| Thu Jun 14 01:22:24 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:22:24 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:22:24 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:22:24 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:22:24 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:22:24 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:22:24 [conn11] end connection 127.0.0.1:53664 (5 connections now open)
m30000| Thu Jun 14 01:22:24 [conn10] end connection 127.0.0.1:53663 (5 connections now open)
m30000| Thu Jun 14 01:22:24 [conn9] end connection 127.0.0.1:53662 (5 connections now open)
m30000| Thu Jun 14 01:22:24 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:22:24 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:22:24 dbexit: really exiting now
Thu Jun 14 01:22:25 shell: stopped mongo program on port 30000
*** ShardingTest add_shard1 completed successfully in 8.449 seconds ***
8507.930994ms
Thu Jun 14 01:22:25 [initandlisten] connection accepted from 127.0.0.1:42080 #2 (1 connection now open)
*******************************************
Test : addshard2.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard2.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard2.js";TestData.testFile = "addshard2.js";TestData.testName = "addshard2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:22:25 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/add_shard20'
Thu Jun 14 01:22:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/add_shard20
m30000| Thu Jun 14 01:22:26
m30000| Thu Jun 14 01:22:26 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:22:26
m30000| Thu Jun 14 01:22:26 [initandlisten] MongoDB starting : pid=20569 port=30000 dbpath=/data/db/add_shard20 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:22:26 [initandlisten]
m30000| Thu Jun 14 01:22:26 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:22:26 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:22:26 [initandlisten]
m30000| Thu Jun 14 01:22:26 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:22:26 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:22:26 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:22:26 [initandlisten]
m30000| Thu Jun 14 01:22:26 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:22:26 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:22:26 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:22:26 [initandlisten] options: { dbpath: "/data/db/add_shard20", port: 30000 }
m30000| Thu Jun 14 01:22:26 [websvr] admin web console waiting for connections on port 31000
m30000| Thu Jun 14 01:22:26 [initandlisten] waiting for connections on port 30000
"domU-12-31-39-01-70-B4:30000"
m30000| Thu Jun 14 01:22:26 [initandlisten] connection accepted from 127.0.0.1:53667 #1 (1 connection now open)
ShardingTest add_shard2 :
{
"config" : "domU-12-31-39-01-70-B4:30000",
"shards" : [
connection to domU-12-31-39-01-70-B4:30000
]
}
Thu Jun 14 01:22:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:30000
m30000| Thu Jun 14 01:22:26 [initandlisten] connection accepted from 10.255.119.66:40591 #2 (2 connections now open)
m30000| Thu Jun 14 01:22:26 [FileAllocator] allocating new datafile /data/db/add_shard20/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:22:26 [FileAllocator] creating directory /data/db/add_shard20/_tmp
m30999| Thu Jun 14 01:22:26 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:22:26 [mongosMain] MongoS version 2.1.2-pre- starting: pid=20584 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:22:26 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:22:26 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:22:26 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:30000", port: 30999 }
m30000| Thu Jun 14 01:22:26 [initandlisten] connection accepted from 10.255.119.66:40593 #3 (3 connections now open)
m30000| Thu Jun 14 01:22:26 [FileAllocator] done allocating datafile /data/db/add_shard20/config.ns, size: 16MB, took 0.273 secs
m30000| Thu Jun 14 01:22:26 [FileAllocator] allocating new datafile /data/db/add_shard20/config.0, filling with zeroes...
m30000| Thu Jun 14 01:22:26 [FileAllocator] done allocating datafile /data/db/add_shard20/config.0, size: 16MB, took 0.308 secs
m30000| Thu Jun 14 01:22:26 [FileAllocator] allocating new datafile /data/db/add_shard20/config.1, filling with zeroes...
m30000| Thu Jun 14 01:22:26 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [conn2] insert config.settings keyUpdates:0 locks(micros) w:597977 597ms
m30000| Thu Jun 14 01:22:26 [initandlisten] connection accepted from 10.255.119.66:40596 #4 (4 connections now open)
m30000| Thu Jun 14 01:22:26 [initandlisten] connection accepted from 10.255.119.66:40597 #5 (5 connections now open)
m30000| Thu Jun 14 01:22:26 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:22:26 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:22:26 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:22:26 [Balancer] about to contact config servers and shards
m30000| Thu Jun 14 01:22:26 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:22:26 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:22:26 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:22:26 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:22:26 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:22:26
m30999| Thu Jun 14 01:22:26 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:22:26 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [initandlisten] connection accepted from 10.255.119.66:40598 #6 (6 connections now open)
m30000| Thu Jun 14 01:22:26 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:22:26 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' acquired, ts : 4fd975125e73225e7386c291
m30999| Thu Jun 14 01:22:26 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' unlocked.
m30999| Thu Jun 14 01:22:26 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:30000 and process domU-12-31-39-01-70-B4:30999:1339651346:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:22:26 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:22:26 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : domU-12-31-39-01-70-B4:30000
m30999| Thu Jun 14 01:22:26 [mongosMain] connection accepted from 127.0.0.1:52526 #1 (1 connection now open)
m30999| Thu Jun 14 01:22:26 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:22:26 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:22:26 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:22:26 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:30000
m30999| Thu Jun 14 01:22:26 [conn] going to add shard: { _id: "shard0000", host: "domU-12-31-39-01-70-B4:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
Thu Jun 14 01:22:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/add_shard21 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m30001| note: noprealloc may hurt performance in many applications
m30001| Thu Jun 14 01:22:26
m30001| Thu Jun 14 01:22:26 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:22:26
m30001| Thu Jun 14 01:22:27 [initandlisten] MongoDB starting : pid=20605 port=30001 dbpath=/data/db/add_shard21 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:22:27 [initandlisten]
m30001| Thu Jun 14 01:22:27 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:22:27 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:22:27 [initandlisten]
m30001| Thu Jun 14 01:22:27 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:22:27 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:22:27 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:22:27 [initandlisten]
m30001| Thu Jun 14 01:22:27 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:22:27 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:22:27 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:22:27 [initandlisten] options: { dbpath: "/data/db/add_shard21", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 30001, smallfiles: true }
m30000| Thu Jun 14 01:22:27 [FileAllocator] done allocating datafile /data/db/add_shard20/config.1, size: 32MB, took 0.658 secs
m30001| Thu Jun 14 01:22:27 [initandlisten] waiting for connections on port 30001
Thu Jun 14 01:22:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/add_shard22 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m30001| Thu Jun 14 01:22:27 [initandlisten] connection accepted from 127.0.0.1:52156 #1 (1 connection now open)
m30002| note: noprealloc may hurt performance in many applications
m30002| Thu Jun 14 01:22:27
m30002| Thu Jun 14 01:22:27 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:22:27
m30002| Thu Jun 14 01:22:27 [initandlisten] MongoDB starting : pid=20619 port=30002 dbpath=/data/db/add_shard22 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:22:27 [initandlisten]
m30002| Thu Jun 14 01:22:27 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:22:27 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:22:27 [initandlisten]
m30002| Thu Jun 14 01:22:27 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:22:27 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:22:27 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:22:27 [initandlisten]
m30002| Thu Jun 14 01:22:27 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:22:27 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:22:27 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:22:27 [initandlisten] options: { dbpath: "/data/db/add_shard22", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 30002, smallfiles: true }
m30002| Thu Jun 14 01:22:27 [initandlisten] waiting for connections on port 30002
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31200,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "add_shard2_rs1"
}
}
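The options document above is what ReplSetTest prints for node 0 of the three-node set add_shard2_rs1 (ports 31200-31202) before starting it. A sketch of the harness calls driving this section, assuming the object-style ReplSetTest constructor used by the jstests (addshard2.js may differ in detail):

    // start a 3-node replica set intended for use as a shard, then initiate it
    var rs1 = new ReplSetTest({ name: "add_shard2_rs1", nodes: 3, oplogSize: 40 });
    rs1.startSet();   // launches the three mongod processes shown below
    rs1.initiate();   // sends the replSetInitiate config printed further down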
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs1-0'
m30002| Thu Jun 14 01:22:27 [initandlisten] connection accepted from 127.0.0.1:59513 #1 (1 connection now open)
Thu Jun 14 01:22:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet add_shard2_rs1 --dbpath /data/db/add_shard2_rs1-0
m31200| note: noprealloc may hurt performance in many applications
m31200| Thu Jun 14 01:22:27
m31200| Thu Jun 14 01:22:27 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31200| Thu Jun 14 01:22:27
m31200| Thu Jun 14 01:22:27 [initandlisten] MongoDB starting : pid=20631 port=31200 dbpath=/data/db/add_shard2_rs1-0 32-bit host=domU-12-31-39-01-70-B4
m31200| Thu Jun 14 01:22:27 [initandlisten]
m31200| Thu Jun 14 01:22:27 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31200| Thu Jun 14 01:22:27 [initandlisten] ** Not recommended for production.
m31200| Thu Jun 14 01:22:27 [initandlisten]
m31200| Thu Jun 14 01:22:27 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31200| Thu Jun 14 01:22:27 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31200| Thu Jun 14 01:22:27 [initandlisten] ** with --journal, the limit is lower
m31200| Thu Jun 14 01:22:27 [initandlisten]
m31200| Thu Jun 14 01:22:27 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31200| Thu Jun 14 01:22:27 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31200| Thu Jun 14 01:22:27 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31200| Thu Jun 14 01:22:27 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs1-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "add_shard2_rs1", rest: true, smallfiles: true }
m31200| Thu Jun 14 01:22:27 [initandlisten] waiting for connections on port 31200
m31200| Thu Jun 14 01:22:27 [websvr] admin web console waiting for connections on port 32200
m31200| Thu Jun 14 01:22:27 [initandlisten] connection accepted from 10.255.119.66:51444 #1 (1 connection now open)
m31200| Thu Jun 14 01:22:27 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Thu Jun 14 01:22:27 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to localhost:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31201,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "add_shard2_rs1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs1-1'
m31200| Thu Jun 14 01:22:28 [initandlisten] connection accepted from 127.0.0.1:48359 #2 (2 connections now open)
Thu Jun 14 01:22:28 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet add_shard2_rs1 --dbpath /data/db/add_shard2_rs1-1
m31201| note: noprealloc may hurt performance in many applications
m31201| Thu Jun 14 01:22:28
m31201| Thu Jun 14 01:22:28 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31201| Thu Jun 14 01:22:28
m31201| Thu Jun 14 01:22:28 [initandlisten] MongoDB starting : pid=20647 port=31201 dbpath=/data/db/add_shard2_rs1-1 32-bit host=domU-12-31-39-01-70-B4
m31201| Thu Jun 14 01:22:28 [initandlisten]
m31201| Thu Jun 14 01:22:28 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31201| Thu Jun 14 01:22:28 [initandlisten] ** Not recommended for production.
m31201| Thu Jun 14 01:22:28 [initandlisten]
m31201| Thu Jun 14 01:22:28 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31201| Thu Jun 14 01:22:28 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31201| Thu Jun 14 01:22:28 [initandlisten] ** with --journal, the limit is lower
m31201| Thu Jun 14 01:22:28 [initandlisten]
m31201| Thu Jun 14 01:22:28 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31201| Thu Jun 14 01:22:28 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31201| Thu Jun 14 01:22:28 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31201| Thu Jun 14 01:22:28 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs1-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "add_shard2_rs1", rest: true, smallfiles: true }
m31201| Thu Jun 14 01:22:28 [initandlisten] waiting for connections on port 31201
m31201| Thu Jun 14 01:22:28 [websvr] admin web console waiting for connections on port 32201
m31201| Thu Jun 14 01:22:28 [initandlisten] connection accepted from 10.255.119.66:41830 #1 (1 connection now open)
m31201| Thu Jun 14 01:22:28 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Thu Jun 14 01:22:28 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Thu Jun 14 01:22:28 [initandlisten] connection accepted from 127.0.0.1:44953 #2 (2 connections now open)
[ connection to localhost:31200, connection to localhost:31201 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31202,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "add_shard2_rs1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs1-2'
Thu Jun 14 01:22:28 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31202 --noprealloc --smallfiles --rest --replSet add_shard2_rs1 --dbpath /data/db/add_shard2_rs1-2
m31202| note: noprealloc may hurt performance in many applications
m31202| Thu Jun 14 01:22:28
m31202| Thu Jun 14 01:22:28 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31202| Thu Jun 14 01:22:28
m31202| Thu Jun 14 01:22:28 [initandlisten] MongoDB starting : pid=20663 port=31202 dbpath=/data/db/add_shard2_rs1-2 32-bit host=domU-12-31-39-01-70-B4
m31202| Thu Jun 14 01:22:28 [initandlisten]
m31202| Thu Jun 14 01:22:28 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31202| Thu Jun 14 01:22:28 [initandlisten] ** Not recommended for production.
m31202| Thu Jun 14 01:22:28 [initandlisten]
m31202| Thu Jun 14 01:22:28 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31202| Thu Jun 14 01:22:28 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31202| Thu Jun 14 01:22:28 [initandlisten] ** with --journal, the limit is lower
m31202| Thu Jun 14 01:22:28 [initandlisten]
m31202| Thu Jun 14 01:22:28 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31202| Thu Jun 14 01:22:28 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31202| Thu Jun 14 01:22:28 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31202| Thu Jun 14 01:22:28 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs1-2", noprealloc: true, oplogSize: 40, port: 31202, replSet: "add_shard2_rs1", rest: true, smallfiles: true }
m31202| Thu Jun 14 01:22:28 [websvr] admin web console waiting for connections on port 32202
m31202| Thu Jun 14 01:22:28 [initandlisten] waiting for connections on port 31202
m31202| Thu Jun 14 01:22:28 [initandlisten] connection accepted from 10.255.119.66:35273 #1 (1 connection now open)
m31202| Thu Jun 14 01:22:28 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31202| Thu Jun 14 01:22:28 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31202| Thu Jun 14 01:22:28 [initandlisten] connection accepted from 127.0.0.1:46419 #2 (2 connections now open)
[
connection to localhost:31200,
connection to localhost:31201,
connection to localhost:31202
]
{
"replSetInitiate" : {
"_id" : "add_shard2_rs1",
"members" : [
{
"_id" : 0,
"host" : "domU-12-31-39-01-70-B4:31200"
},
{
"_id" : 1,
"host" : "domU-12-31-39-01-70-B4:31201"
},
{
"_id" : 2,
"host" : "domU-12-31-39-01-70-B4:31202"
}
]
}
}
m31200| Thu Jun 14 01:22:28 [conn2] replSet replSetInitiate admin command received from client
m31200| Thu Jun 14 01:22:28 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31201| Thu Jun 14 01:22:28 [initandlisten] connection accepted from 10.255.119.66:41835 #3 (3 connections now open)
m31202| Thu Jun 14 01:22:28 [initandlisten] connection accepted from 10.255.119.66:35276 #3 (3 connections now open)
m31200| Thu Jun 14 01:22:28 [conn2] replSet replSetInitiate all members seem up
m31200| Thu Jun 14 01:22:28 [conn2] ******
m31200| Thu Jun 14 01:22:28 [conn2] creating replication oplog of size: 40MB...
m31200| Thu Jun 14 01:22:28 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-0/local.ns, filling with zeroes...
m31200| Thu Jun 14 01:22:28 [FileAllocator] creating directory /data/db/add_shard2_rs1-0/_tmp
m31200| Thu Jun 14 01:22:28 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-0/local.ns, size: 16MB, took 0.225 secs
m31200| Thu Jun 14 01:22:28 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-0/local.0, filling with zeroes...
m31200| Thu Jun 14 01:22:29 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-0/local.0, size: 64MB, took 1.288 secs
m31200| Thu Jun 14 01:22:30 [conn2] ******
m31200| Thu Jun 14 01:22:30 [conn2] replSet info saving a newer config version to local.system.replset
m31200| Thu Jun 14 01:22:30 [conn2] replSet saveConfigLocally done
m31200| Thu Jun 14 01:22:30 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Thu Jun 14 01:22:30 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "add_shard2_rs1", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31200" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31201" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31202" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1555461 w:34 reslen:112 1556ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
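replSetInitiate returns as soon as the config is saved locally; the actual election (PRIMARY at 01:22:47 below) happens asynchronously. A test that needs a primary before continuing would normally poll from a connection to one of the members, roughly like this (assert.soon is the shell test helper; the 60-second timeout is an arbitrary choice):

    // wait until this member reports itself PRIMARY (myState == 1)
    assert.soon(function() {
        var status = db.adminCommand({ replSetGetStatus: 1 });
        return status.ok && status.myState == 1;
    }, "replica set never elected a primary", 60 * 1000);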
m30999| Thu Jun 14 01:22:36 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' acquired, ts : 4fd9751c5e73225e7386c292
m30999| Thu Jun 14 01:22:36 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' unlocked.
m31200| Thu Jun 14 01:22:37 [rsStart] replSet I am domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:37 [rsStart] replSet STARTUP2
m31200| Thu Jun 14 01:22:37 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is up
m31200| Thu Jun 14 01:22:37 [rsSync] replSet SECONDARY
m31200| Thu Jun 14 01:22:37 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is up
m31200| Thu Jun 14 01:22:37 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31201| Thu Jun 14 01:22:38 [rsStart] trying to contact domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:38 [initandlisten] connection accepted from 10.255.119.66:51454 #3 (3 connections now open)
m31201| Thu Jun 14 01:22:38 [rsStart] replSet I am domU-12-31-39-01-70-B4:31201
m31201| Thu Jun 14 01:22:38 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Thu Jun 14 01:22:38 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Thu Jun 14 01:22:38 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-1/local.ns, filling with zeroes...
m31201| Thu Jun 14 01:22:38 [FileAllocator] creating directory /data/db/add_shard2_rs1-1/_tmp
m31202| Thu Jun 14 01:22:38 [rsStart] trying to contact domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:38 [initandlisten] connection accepted from 10.255.119.66:51455 #4 (4 connections now open)
m31202| Thu Jun 14 01:22:38 [rsStart] replSet I am domU-12-31-39-01-70-B4:31202
m31202| Thu Jun 14 01:22:38 [rsStart] replSet got config version 1 from a remote, saving locally
m31202| Thu Jun 14 01:22:38 [rsStart] replSet info saving a newer config version to local.system.replset
m31202| Thu Jun 14 01:22:38 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-2/local.ns, filling with zeroes...
m31202| Thu Jun 14 01:22:38 [FileAllocator] creating directory /data/db/add_shard2_rs1-2/_tmp
m31201| Thu Jun 14 01:22:38 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-1/local.ns, size: 16MB, took 0.252 secs
m31201| Thu Jun 14 01:22:38 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-1/local.0, filling with zeroes...
m31201| Thu Jun 14 01:22:38 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-1/local.0, size: 16MB, took 0.604 secs
m31202| Thu Jun 14 01:22:38 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-2/local.ns, size: 16MB, took 0.597 secs
m31202| Thu Jun 14 01:22:38 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-2/local.0, filling with zeroes...
m31201| Thu Jun 14 01:22:39 [rsStart] replSet saveConfigLocally done
m31201| Thu Jun 14 01:22:39 [rsStart] replSet STARTUP2
m31201| Thu Jun 14 01:22:39 [rsSync] ******
m31201| Thu Jun 14 01:22:39 [rsSync] creating replication oplog of size: 40MB...
m31202| Thu Jun 14 01:22:39 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-2/local.0, size: 16MB, took 0.309 secs
m31201| Thu Jun 14 01:22:39 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-1/local.1, filling with zeroes...
m31200| Thu Jun 14 01:22:39 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state STARTUP2
m31200| Thu Jun 14 01:22:39 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31201 would veto
m31201| Thu Jun 14 01:22:40 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is up
m31201| Thu Jun 14 01:22:40 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state SECONDARY
m31202| Thu Jun 14 01:22:40 [initandlisten] connection accepted from 10.255.119.66:35279 #4 (4 connections now open)
m31201| Thu Jun 14 01:22:40 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is up
m31201| Thu Jun 14 01:22:40 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-1/local.1, size: 64MB, took 1.149 secs
m31202| Thu Jun 14 01:22:40 [rsStart] replSet saveConfigLocally done
m31202| Thu Jun 14 01:22:40 [rsStart] replSet STARTUP2
m31202| Thu Jun 14 01:22:40 [rsSync] ******
m31202| Thu Jun 14 01:22:40 [rsSync] creating replication oplog of size: 40MB...
m31202| Thu Jun 14 01:22:40 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-2/local.1, filling with zeroes...
m31202| Thu Jun 14 01:22:41 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-2/local.1, size: 64MB, took 1.181 secs
m31201| Thu Jun 14 01:22:41 [rsSync] ******
m31201| Thu Jun 14 01:22:41 [rsSync] replSet initial sync pending
m31201| Thu Jun 14 01:22:41 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31202| Thu Jun 14 01:22:41 [rsSync] ******
m31202| Thu Jun 14 01:22:41 [rsSync] replSet initial sync pending
m31202| Thu Jun 14 01:22:41 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31200| Thu Jun 14 01:22:41 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state STARTUP2
m31200| Thu Jun 14 01:22:41 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31202 would veto
m31201| Thu Jun 14 01:22:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state STARTUP2
m31202| Thu Jun 14 01:22:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is up
m31202| Thu Jun 14 01:22:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state SECONDARY
m31201| Thu Jun 14 01:22:42 [initandlisten] connection accepted from 10.255.119.66:41840 #4 (4 connections now open)
m31202| Thu Jun 14 01:22:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is up
m31202| Thu Jun 14 01:22:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state STARTUP2
m30999| Thu Jun 14 01:22:46 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' acquired, ts : 4fd975265e73225e7386c293
m30999| Thu Jun 14 01:22:46 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' unlocked.
m31200| Thu Jun 14 01:22:47 [rsMgr] replSet info electSelf 0
m31202| Thu Jun 14 01:22:47 [conn3] replSet RECOVERING
m31202| Thu Jun 14 01:22:47 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31200 (0)
m31201| Thu Jun 14 01:22:47 [conn3] replSet RECOVERING
m31201| Thu Jun 14 01:22:47 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31200 (0)
m31200| Thu Jun 14 01:22:47 [rsMgr] replSet PRIMARY
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31203, 31204, 31205 ] 31203 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31203,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "add_shard2_rs2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs2-0'
Thu Jun 14 01:22:48 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31203 --noprealloc --smallfiles --rest --replSet add_shard2_rs2 --dbpath /data/db/add_shard2_rs2-0
m31203| note: noprealloc may hurt performance in many applications
m31203| Thu Jun 14 01:22:48
m31203| Thu Jun 14 01:22:48 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31203| Thu Jun 14 01:22:48
m31203| Thu Jun 14 01:22:48 [initandlisten] MongoDB starting : pid=20722 port=31203 dbpath=/data/db/add_shard2_rs2-0 32-bit host=domU-12-31-39-01-70-B4
m31203| Thu Jun 14 01:22:48 [initandlisten]
m31203| Thu Jun 14 01:22:48 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31203| Thu Jun 14 01:22:48 [initandlisten] ** Not recommended for production.
m31203| Thu Jun 14 01:22:48 [initandlisten]
m31203| Thu Jun 14 01:22:48 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31203| Thu Jun 14 01:22:48 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31203| Thu Jun 14 01:22:48 [initandlisten] ** with --journal, the limit is lower
m31203| Thu Jun 14 01:22:48 [initandlisten]
m31203| Thu Jun 14 01:22:48 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31203| Thu Jun 14 01:22:48 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31203| Thu Jun 14 01:22:48 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31203| Thu Jun 14 01:22:48 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs2-0", noprealloc: true, oplogSize: 40, port: 31203, replSet: "add_shard2_rs2", rest: true, smallfiles: true }
m31203| Thu Jun 14 01:22:48 [initandlisten] waiting for connections on port 31203
m31203| Thu Jun 14 01:22:48 [websvr] admin web console waiting for connections on port 32203
m31203| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 10.255.119.66:39948 #1 (1 connection now open)
m31203| Thu Jun 14 01:22:48 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31203| Thu Jun 14 01:22:48 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Thu Jun 14 01:22:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state PRIMARY
m31201| Thu Jun 14 01:22:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state RECOVERING
[ connection to localhost:31203 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31203, 31204, 31205 ] 31204 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31204,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "add_shard2_rs2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs2-1'
m31203| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 127.0.0.1:44916 #2 (2 connections now open)
Thu Jun 14 01:22:48 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31204 --noprealloc --smallfiles --rest --replSet add_shard2_rs2 --dbpath /data/db/add_shard2_rs2-1
m31204| note: noprealloc may hurt performance in many applications
m31204| Thu Jun 14 01:22:48
m31204| Thu Jun 14 01:22:48 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31204| Thu Jun 14 01:22:48
m31202| Thu Jun 14 01:22:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state PRIMARY
m31202| Thu Jun 14 01:22:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state RECOVERING
m31204| Thu Jun 14 01:22:48 [initandlisten] MongoDB starting : pid=20738 port=31204 dbpath=/data/db/add_shard2_rs2-1 32-bit host=domU-12-31-39-01-70-B4
m31204| Thu Jun 14 01:22:48 [initandlisten]
m31204| Thu Jun 14 01:22:48 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31204| Thu Jun 14 01:22:48 [initandlisten] ** Not recommended for production.
m31204| Thu Jun 14 01:22:48 [initandlisten]
m31204| Thu Jun 14 01:22:48 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31204| Thu Jun 14 01:22:48 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31204| Thu Jun 14 01:22:48 [initandlisten] ** with --journal, the limit is lower
m31204| Thu Jun 14 01:22:48 [initandlisten]
m31204| Thu Jun 14 01:22:48 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31204| Thu Jun 14 01:22:48 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31204| Thu Jun 14 01:22:48 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31204| Thu Jun 14 01:22:48 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs2-1", noprealloc: true, oplogSize: 40, port: 31204, replSet: "add_shard2_rs2", rest: true, smallfiles: true }
m31204| Thu Jun 14 01:22:48 [websvr] admin web console waiting for connections on port 32204
m31204| Thu Jun 14 01:22:48 [initandlisten] waiting for connections on port 31204
m31204| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 10.255.119.66:47863 #1 (1 connection now open)
m31204| Thu Jun 14 01:22:48 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31204| Thu Jun 14 01:22:48 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31204| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 127.0.0.1:32998 #2 (2 connections now open)
[ connection to localhost:31203, connection to localhost:31204 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31203, 31204, 31205 ] 31205 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31205,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "add_shard2_rs2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs2-2'
Thu Jun 14 01:22:48 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31205 --noprealloc --smallfiles --rest --replSet add_shard2_rs2 --dbpath /data/db/add_shard2_rs2-2
m31205| note: noprealloc may hurt performance in many applications
m31205| Thu Jun 14 01:22:48
m31205| Thu Jun 14 01:22:48 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31205| Thu Jun 14 01:22:48
m31205| Thu Jun 14 01:22:48 [initandlisten] MongoDB starting : pid=20754 port=31205 dbpath=/data/db/add_shard2_rs2-2 32-bit host=domU-12-31-39-01-70-B4
m31205| Thu Jun 14 01:22:48 [initandlisten]
m31205| Thu Jun 14 01:22:48 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31205| Thu Jun 14 01:22:48 [initandlisten] ** Not recommended for production.
m31205| Thu Jun 14 01:22:48 [initandlisten]
m31205| Thu Jun 14 01:22:48 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31205| Thu Jun 14 01:22:48 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31205| Thu Jun 14 01:22:48 [initandlisten] ** with --journal, the limit is lower
m31205| Thu Jun 14 01:22:48 [initandlisten]
m31205| Thu Jun 14 01:22:48 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31205| Thu Jun 14 01:22:48 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31205| Thu Jun 14 01:22:48 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31205| Thu Jun 14 01:22:48 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs2-2", noprealloc: true, oplogSize: 40, port: 31205, replSet: "add_shard2_rs2", rest: true, smallfiles: true }
m31205| Thu Jun 14 01:22:48 [initandlisten] waiting for connections on port 31205
m31205| Thu Jun 14 01:22:48 [websvr] admin web console waiting for connections on port 32205
m31205| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 10.255.119.66:36899 #1 (1 connection now open)
m31205| Thu Jun 14 01:22:48 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31205| Thu Jun 14 01:22:48 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
connection to localhost:31203,
connection to localhost:31204,
connection to localhost:31205
]
{
"replSetInitiate" : {
"_id" : "add_shard2_rs2",
"members" : [
{
"_id" : 0,
"host" : "domU-12-31-39-01-70-B4:31203"
},
{
"_id" : 1,
"host" : "domU-12-31-39-01-70-B4:31204"
},
{
"_id" : 2,
"host" : "domU-12-31-39-01-70-B4:31205"
}
]
}
}
m31205| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 127.0.0.1:56313 #2 (2 connections now open)
m31203| Thu Jun 14 01:22:48 [conn2] replSet replSetInitiate admin command received from client
m31203| Thu Jun 14 01:22:48 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31204| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 10.255.119.66:47868 #3 (3 connections now open)
m31205| Thu Jun 14 01:22:48 [initandlisten] connection accepted from 10.255.119.66:36902 #3 (3 connections now open)
m31203| Thu Jun 14 01:22:48 [conn2] replSet replSetInitiate all members seem up
m31203| Thu Jun 14 01:22:48 [conn2] ******
m31203| Thu Jun 14 01:22:48 [conn2] creating replication oplog of size: 40MB...
m31203| Thu Jun 14 01:22:48 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-0/local.ns, filling with zeroes...
m31203| Thu Jun 14 01:22:48 [FileAllocator] creating directory /data/db/add_shard2_rs2-0/_tmp
m31203| Thu Jun 14 01:22:48 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-0/local.ns, size: 16MB, took 0.251 secs
m31203| Thu Jun 14 01:22:48 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-0/local.0, filling with zeroes...
m31200| Thu Jun 14 01:22:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state RECOVERING
m31200| Thu Jun 14 01:22:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state RECOVERING
m31203| Thu Jun 14 01:22:50 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-0/local.0, size: 64MB, took 1.16 secs
m31203| Thu Jun 14 01:22:50 [conn2] ******
m31203| Thu Jun 14 01:22:50 [conn2] replSet info saving a newer config version to local.system.replset
m31203| Thu Jun 14 01:22:50 [conn2] replSet saveConfigLocally done
m31203| Thu Jun 14 01:22:50 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31203| Thu Jun 14 01:22:50 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "add_shard2_rs2", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31203" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31204" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31205" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1455842 w:35 reslen:112 1454ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
m31201| Thu Jun 14 01:22:51 [conn3] end connection 10.255.119.66:41835 (3 connections now open)
m31201| Thu Jun 14 01:22:51 [initandlisten] connection accepted from 10.255.119.66:41852 #5 (4 connections now open)
m31200| Thu Jun 14 01:22:54 [conn3] end connection 10.255.119.66:51454 (3 connections now open)
m31200| Thu Jun 14 01:22:54 [initandlisten] connection accepted from 10.255.119.66:51470 #5 (4 connections now open)
m31200| Thu Jun 14 01:22:56 [conn4] end connection 10.255.119.66:51455 (3 connections now open)
m31200| Thu Jun 14 01:22:56 [initandlisten] connection accepted from 10.255.119.66:51471 #6 (4 connections now open)
m30999| Thu Jun 14 01:22:56 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' acquired, ts : 4fd975305e73225e7386c294
m30999| Thu Jun 14 01:22:56 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' unlocked.
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync pending
m31201| Thu Jun 14 01:22:57 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:57 [initandlisten] connection accepted from 10.255.119.66:51472 #7 (5 connections now open)
m31201| Thu Jun 14 01:22:57 [rsSync] build index local.me { _id: 1 }
m31201| Thu Jun 14 01:22:57 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync drop all databases
m31201| Thu Jun 14 01:22:57 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync clone all databases
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync data copy, starting syncup
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync building indexes
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync query minValid
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync finishing up
m31201| Thu Jun 14 01:22:57 [rsSync] replSet set minValid=4fd97516:1
m31201| Thu Jun 14 01:22:57 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Thu Jun 14 01:22:57 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:22:57 [rsSync] replSet initial sync done
m31200| Thu Jun 14 01:22:57 [conn7] end connection 10.255.119.66:51472 (4 connections now open)
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync pending
m31202| Thu Jun 14 01:22:57 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:57 [initandlisten] connection accepted from 10.255.119.66:51473 #8 (5 connections now open)
m31202| Thu Jun 14 01:22:57 [rsSync] build index local.me { _id: 1 }
m31202| Thu Jun 14 01:22:57 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync drop all databases
m31202| Thu Jun 14 01:22:57 [rsSync] dropAllDatabasesExceptLocal 1
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync clone all databases
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync data copy, starting syncup
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync building indexes
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync query minValid
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync finishing up
m31202| Thu Jun 14 01:22:57 [rsSync] replSet set minValid=4fd97516:1
m31202| Thu Jun 14 01:22:57 [rsSync] build index local.replset.minvalid { _id: 1 }
m31202| Thu Jun 14 01:22:57 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Thu Jun 14 01:22:57 [rsSync] replSet initial sync done
m31200| Thu Jun 14 01:22:57 [conn8] end connection 10.255.119.66:51473 (4 connections now open)
m31203| Thu Jun 14 01:22:58 [rsStart] replSet I am domU-12-31-39-01-70-B4:31203
m31203| Thu Jun 14 01:22:58 [rsStart] replSet STARTUP2
m31203| Thu Jun 14 01:22:58 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31204 is up
m31203| Thu Jun 14 01:22:58 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31205 is up
m31203| Thu Jun 14 01:22:58 [rsSync] replSet SECONDARY
m31201| Thu Jun 14 01:22:58 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:58 [initandlisten] connection accepted from 10.255.119.66:51474 #9 (5 connections now open)
m31204| Thu Jun 14 01:22:58 [rsStart] trying to contact domU-12-31-39-01-70-B4:31203
m31203| Thu Jun 14 01:22:58 [initandlisten] connection accepted from 10.255.119.66:39964 #3 (3 connections now open)
m31204| Thu Jun 14 01:22:58 [rsStart] replSet I am domU-12-31-39-01-70-B4:31204
m31204| Thu Jun 14 01:22:58 [rsStart] replSet got config version 1 from a remote, saving locally
m31204| Thu Jun 14 01:22:58 [rsStart] replSet info saving a newer config version to local.system.replset
m31204| Thu Jun 14 01:22:58 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-1/local.ns, filling with zeroes...
m31204| Thu Jun 14 01:22:58 [FileAllocator] creating directory /data/db/add_shard2_rs2-1/_tmp
m31202| Thu Jun 14 01:22:58 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:58 [initandlisten] connection accepted from 10.255.119.66:51476 #10 (6 connections now open)
m31205| Thu Jun 14 01:22:58 [rsStart] trying to contact domU-12-31-39-01-70-B4:31203
m31203| Thu Jun 14 01:22:58 [initandlisten] connection accepted from 10.255.119.66:39966 #4 (4 connections now open)
m31205| Thu Jun 14 01:22:58 [rsStart] replSet I am domU-12-31-39-01-70-B4:31205
m31205| Thu Jun 14 01:22:58 [rsStart] replSet got config version 1 from a remote, saving locally
m31205| Thu Jun 14 01:22:58 [rsStart] replSet info saving a newer config version to local.system.replset
m31205| Thu Jun 14 01:22:58 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-2/local.ns, filling with zeroes...
m31205| Thu Jun 14 01:22:58 [FileAllocator] creating directory /data/db/add_shard2_rs2-2/_tmp
m31201| Thu Jun 14 01:22:58 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:58 [initandlisten] connection accepted from 10.255.119.66:51478 #11 (7 connections now open)
m31201| Thu Jun 14 01:22:58 [rsSync] replSet SECONDARY
m31204| Thu Jun 14 01:22:58 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-1/local.ns, size: 16MB, took 0.244 secs
m31204| Thu Jun 14 01:22:58 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-1/local.0, filling with zeroes...
m31202| Thu Jun 14 01:22:58 [rsSync] replSet SECONDARY
m31202| Thu Jun 14 01:22:58 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:22:58 [initandlisten] connection accepted from 10.255.119.66:51479 #12 (8 connections now open)
m31205| Thu Jun 14 01:22:59 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-2/local.ns, size: 16MB, took 0.491 secs
m31205| Thu Jun 14 01:22:59 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-2/local.0, filling with zeroes...
m31204| Thu Jun 14 01:22:59 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-1/local.0, size: 16MB, took 0.507 secs
m31204| Thu Jun 14 01:22:59 [rsStart] replSet saveConfigLocally done
m31205| Thu Jun 14 01:22:59 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-2/local.0, size: 16MB, took 0.29 secs
m31204| Thu Jun 14 01:22:59 [rsStart] replSet STARTUP2
m31204| Thu Jun 14 01:22:59 [rsSync] ******
m31204| Thu Jun 14 01:22:59 [rsSync] creating replication oplog of size: 40MB...
m31204| Thu Jun 14 01:22:59 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-1/local.1, filling with zeroes...
m31200| Thu Jun 14 01:22:59 [slaveTracking] build index local.slaves { _id: 1 }
m31200| Thu Jun 14 01:22:59 [slaveTracking] build index done. scanned 0 total records. 0 secs
m31200| Thu Jun 14 01:22:59 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state SECONDARY
m31200| Thu Jun 14 01:22:59 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state SECONDARY
m31203| Thu Jun 14 01:23:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31204 is now in state STARTUP2
m31203| Thu Jun 14 01:23:00 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31204 would veto
m31201| Thu Jun 14 01:23:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state SECONDARY
m31204| Thu Jun 14 01:23:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31203 is up
m31204| Thu Jun 14 01:23:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31203 is now in state SECONDARY
m31205| Thu Jun 14 01:23:00 [initandlisten] connection accepted from 10.255.119.66:36914 #4 (4 connections now open)
m31204| Thu Jun 14 01:23:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31205 is up
m31202| Thu Jun 14 01:23:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state SECONDARY
m31205| Thu Jun 14 01:23:00 [rsStart] replSet saveConfigLocally done
m31205| Thu Jun 14 01:23:00 [rsStart] replSet STARTUP2
m31205| Thu Jun 14 01:23:00 [rsSync] ******
m31205| Thu Jun 14 01:23:00 [rsSync] creating replication oplog of size: 40MB...
m31204| Thu Jun 14 01:23:00 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-1/local.1, size: 64MB, took 1.264 secs
m31205| Thu Jun 14 01:23:00 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-2/local.1, filling with zeroes...
m31203| Thu Jun 14 01:23:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31205 is now in state STARTUP2
m31203| Thu Jun 14 01:23:02 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31205 would veto
m31205| Thu Jun 14 01:23:02 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-2/local.1, size: 64MB, took 1.49 secs
m31204| Thu Jun 14 01:23:02 [rsSync] ******
m31204| Thu Jun 14 01:23:02 [rsSync] replSet initial sync pending
m31204| Thu Jun 14 01:23:02 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31205| Thu Jun 14 01:23:02 [rsSync] ******
m31205| Thu Jun 14 01:23:02 [rsSync] replSet initial sync pending
m31205| Thu Jun 14 01:23:02 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31204| Thu Jun 14 01:23:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31205 is now in state STARTUP2
m31205| Thu Jun 14 01:23:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31203 is up
m31205| Thu Jun 14 01:23:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31203 is now in state SECONDARY
m31204| Thu Jun 14 01:23:02 [initandlisten] connection accepted from 10.255.119.66:47882 #4 (4 connections now open)
m31205| Thu Jun 14 01:23:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31204 is up
m31205| Thu Jun 14 01:23:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31204 is now in state STARTUP2
m31202| Thu Jun 14 01:23:05 [conn3] end connection 10.255.119.66:35276 (3 connections now open)
m31202| Thu Jun 14 01:23:05 [initandlisten] connection accepted from 10.255.119.66:35305 #5 (4 connections now open)
m30999| Thu Jun 14 01:23:06 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' acquired, ts : 4fd9753a5e73225e7386c295
m30999| Thu Jun 14 01:23:06 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651346:1804289383' unlocked.
m31203| Thu Jun 14 01:23:08 [rsMgr] replSet info electSelf 0
m31205| Thu Jun 14 01:23:08 [conn3] replSet RECOVERING
m31205| Thu Jun 14 01:23:08 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31203 (0)
m31204| Thu Jun 14 01:23:08 [conn3] replSet RECOVERING
m31204| Thu Jun 14 01:23:08 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31203 (0)
m31203| Thu Jun 14 01:23:08 [rsMgr] replSet PRIMARY
m31202| Thu Jun 14 01:23:08 [conn4] end connection 10.255.119.66:35279 (3 connections now open)
m31202| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:35306 #6 (4 connections now open)
m30001| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:44162 #2 (2 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] going to add shard: { _id: "bar", host: "domU-12-31-39-01-70-B4:30001" }
m30001| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:44163 #3 (3 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:30001 serverID: 4fd975125e73225e7386c290
m30999| Thu Jun 14 01:23:08 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:30000 serverID: 4fd975125e73225e7386c290
m30000| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:40649 #7 (7 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] starting new replica set monitor for replica set add_shard2_rs1 with seed of domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31200 for replica set add_shard2_rs1
m31200| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:51487 #13 (9 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31200", 1: "domU-12-31-39-01-70-B4:31202", 2: "domU-12-31-39-01-70-B4:31201" } from add_shard2_rs1/
m30999| Thu Jun 14 01:23:08 [conn] trying to add new host domU-12-31-39-01-70-B4:31200 to replica set add_shard2_rs1
m31200| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:51488 #14 (10 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31200 in replica set add_shard2_rs1
m30999| Thu Jun 14 01:23:08 [conn] trying to add new host domU-12-31-39-01-70-B4:31201 to replica set add_shard2_rs1
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31201 in replica set add_shard2_rs1
m30999| Thu Jun 14 01:23:08 [conn] trying to add new host domU-12-31-39-01-70-B4:31202 to replica set add_shard2_rs1
m31201| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:41872 #6 (5 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31202 in replica set add_shard2_rs1
m31202| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:35313 #7 (5 connections now open)
m31200| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:51491 #15 (11 connections now open)
m31200| Thu Jun 14 01:23:08 [conn13] end connection 10.255.119.66:51487 (10 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] Primary for replica set add_shard2_rs1 changed to domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:41875 #7 (6 connections now open)
m31202| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:35316 #8 (6 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] replica set monitor for replica set add_shard2_rs1 started, address is add_shard2_rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m30999| Thu Jun 14 01:23:08 [ReplicaSetMonitorWatcher] starting
m31200| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:51494 #16 (11 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] going to add shard: { _id: "add_shard2_rs1", host: "add_shard2_rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202" }
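Adding a replica set as a shard uses the setName/host,host,... connection-string form stored in the shard document above; mongos contacts the seed, discovers the remaining members, and records the full address. Against the mongos on 30999 the requests look roughly like this (the lowercase addshard spelling matches the log; the explicit name mirrors the "myshard" entry added next):

    var admin = db.getSiblingDB("admin");
    // one seed is enough; mongos expands it to the full member list
    admin.runCommand({ addshard: "add_shard2_rs1/domU-12-31-39-01-70-B4:31200" });
    // an explicit shard name can be supplied instead of the default (the set name)
    admin.runCommand({ addshard: "add_shard2_rs2/domU-12-31-39-01-70-B4:31203", name: "myshard" });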
m30999| Thu Jun 14 01:23:08 [conn] starting new replica set monitor for replica set add_shard2_rs2 with seed of domU-12-31-39-01-70-B4:31203
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31203 for replica set add_shard2_rs2
m31203| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:39984 #5 (5 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31203", 1: "domU-12-31-39-01-70-B4:31205", 2: "domU-12-31-39-01-70-B4:31204" } from add_shard2_rs2/
m30999| Thu Jun 14 01:23:08 [conn] trying to add new host domU-12-31-39-01-70-B4:31203 to replica set add_shard2_rs2
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31203 in replica set add_shard2_rs2
m30999| Thu Jun 14 01:23:08 [conn] trying to add new host domU-12-31-39-01-70-B4:31204 to replica set add_shard2_rs2
m31203| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:39985 #6 (6 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31204 in replica set add_shard2_rs2
m30999| Thu Jun 14 01:23:08 [conn] trying to add new host domU-12-31-39-01-70-B4:31205 to replica set add_shard2_rs2
m31204| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:47898 #5 (5 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31205 in replica set add_shard2_rs2
m31205| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:36932 #5 (5 connections now open)
m31203| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:39988 #7 (7 connections now open)
m31203| Thu Jun 14 01:23:08 [conn5] end connection 10.255.119.66:39984 (6 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] Primary for replica set add_shard2_rs2 changed to domU-12-31-39-01-70-B4:31203
m31204| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:47901 #6 (6 connections now open)
m31205| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:36935 #6 (6 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] replica set monitor for replica set add_shard2_rs2 started, address is add_shard2_rs2/domU-12-31-39-01-70-B4:31203,domU-12-31-39-01-70-B4:31204,domU-12-31-39-01-70-B4:31205
m31203| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:39991 #8 (7 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] going to add shard: { _id: "myshard", host: "add_shard2_rs2/domU-12-31-39-01-70-B4:31203,domU-12-31-39-01-70-B4:31204,domU-12-31-39-01-70-B4:31205" }
m30002| Thu Jun 14 01:23:08 [initandlisten] connection accepted from 10.255.119.66:42372 #2 (2 connections now open)
m30999| Thu Jun 14 01:23:08 [conn] going to add shard: { _id: "shard0001", host: "domU-12-31-39-01-70-B4:30002" }
m30999| Thu Jun 14 01:23:08 [conn] addshard request { addshard: "add_shard2_rs2/NonExistingHost:31203" } failed: in seed list add_shard2_rs2/NonExistingHost:31203, host NonExistingHost:31203 does not belong to replica set add_shard2_rs2
m30999| Thu Jun 14 01:23:08 [conn] addshard request { addshard: "add_shard2_rs2/domU-12-31-39-01-70-B4:31203,foo:9999" } failed: in seed list add_shard2_rs2/domU-12-31-39-01-70-B4:31203,foo:9999, host foo:9999 does not belong to replica set add_shard2_rs2
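Those two rejected requests exercise seed-list validation: mongos refuses any seed host that the replica set does not report as a member. A test asserts on the command result rather than on the log text, roughly:

    var res = db.getSiblingDB("admin").runCommand(
        { addshard: "add_shard2_rs2/NonExistingHost:31203" });
    assert.eq(0, res.ok, "addshard should fail for a host outside the set: " + tojson(res));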
m30999| Thu Jun 14 01:23:08 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:23:08 [conn3] end connection 10.255.119.66:40593 (6 connections now open)
m30000| Thu Jun 14 01:23:08 [conn4] end connection 10.255.119.66:40596 (5 connections now open)
m30000| Thu Jun 14 01:23:08 [conn6] end connection 10.255.119.66:40598 (4 connections now open)
m30001| Thu Jun 14 01:23:08 [conn3] end connection 10.255.119.66:44163 (2 connections now open)
m30000| Thu Jun 14 01:23:08 [conn7] end connection 10.255.119.66:40649 (3 connections now open)
m31204| Thu Jun 14 01:23:08 [conn6] end connection 10.255.119.66:47901 (5 connections now open)
m31203| Thu Jun 14 01:23:08 [conn6] end connection 10.255.119.66:39985 (6 connections now open)
m31204| Thu Jun 14 01:23:08 [conn5] end connection 10.255.119.66:47898 (4 connections now open)
m31202| Thu Jun 14 01:23:08 [conn7] end connection 10.255.119.66:35313 (5 connections now open)
m31202| Thu Jun 14 01:23:08 [conn8] end connection 10.255.119.66:35316 (4 connections now open)
m31200| Thu Jun 14 01:23:08 [conn14] end connection 10.255.119.66:51488 (10 connections now open)
m31200| Thu Jun 14 01:23:08 [conn15] end connection 10.255.119.66:51491 (9 connections now open)
m31200| Thu Jun 14 01:23:08 [conn16] end connection 10.255.119.66:51494 (8 connections now open)
m31201| Thu Jun 14 01:23:08 [conn7] end connection 10.255.119.66:41875 (5 connections now open)
m31201| Thu Jun 14 01:23:08 [conn6] end connection 10.255.119.66:41872 (4 connections now open)
m31203| Thu Jun 14 01:23:08 [conn7] end connection 10.255.119.66:39988 (5 connections now open)
m31205| Thu Jun 14 01:23:08 [conn6] end connection 10.255.119.66:36935 (5 connections now open)
m31203| Thu Jun 14 01:23:08 [conn8] end connection 10.255.119.66:39991 (4 connections now open)
m31205| Thu Jun 14 01:23:08 [conn5] end connection 10.255.119.66:36932 (4 connections now open)
m30002| Thu Jun 14 01:23:08 [conn2] end connection 10.255.119.66:42372 (1 connection now open)
m31204| Thu Jun 14 01:23:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31205 is now in state RECOVERING
m31204| Thu Jun 14 01:23:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31203 is now in state PRIMARY
m31205| Thu Jun 14 01:23:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31204 is now in state RECOVERING
m31205| Thu Jun 14 01:23:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31203 is now in state PRIMARY
Thu Jun 14 01:23:09 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:23:09 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:23:09 [interruptThread] now exiting
m30000| Thu Jun 14 01:23:09 dbexit:
m30000| Thu Jun 14 01:23:09 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:23:09 [interruptThread] closing listening socket: 10
m30000| Thu Jun 14 01:23:09 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:23:09 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:23:09 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:23:09 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:23:09 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:23:09 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:23:09 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:23:09 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:23:09 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:23:09 dbexit: really exiting now
m31203| Thu Jun 14 01:23:10 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31205 is now in state RECOVERING
m31203| Thu Jun 14 01:23:10 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31204 is now in state RECOVERING
Thu Jun 14 01:23:10 shell: stopped mongo program on port 30000
*** ShardingTest add_shard2 completed successfully in 44.287 seconds ***
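The teardown that follows is what the shell helpers' stop methods produce: mongos and the plain mongods receive SIGTERM from ShardingTest.stop(), and each replica set is brought down member by member by ReplSetTest.stopSet(), which also deletes the dbpaths. In outline (the variable names are placeholders, not the test's actual identifiers):

    s.stop();        // mongos on 30999 plus the ShardingTest mongods
    rs1.stopSet();   // the "Shutting down mongod in port 312xx" blocks below
    rs2.stopSet();   // "stopSet deleting all dbpaths" once the members are down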
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
m31200| Thu Jun 14 01:23:10 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Thu Jun 14 01:23:10 [interruptThread] now exiting
m31200| Thu Jun 14 01:23:10 dbexit:
m31200| Thu Jun 14 01:23:10 [interruptThread] shutdown: going to close listening sockets...
m31200| Thu Jun 14 01:23:10 [interruptThread] closing listening socket: 23
m31200| Thu Jun 14 01:23:10 [interruptThread] closing listening socket: 24
m31200| Thu Jun 14 01:23:10 [interruptThread] closing listening socket: 25
m31200| Thu Jun 14 01:23:10 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Thu Jun 14 01:23:10 [interruptThread] shutdown: going to flush diaglog...
m31200| Thu Jun 14 01:23:10 [interruptThread] shutdown: going to close sockets...
m31200| Thu Jun 14 01:23:10 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Thu Jun 14 01:23:10 [interruptThread] shutdown: closing all files...
m31201| Thu Jun 14 01:23:10 [conn5] end connection 10.255.119.66:41852 (3 connections now open)
m31201| Thu Jun 14 01:23:10 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:23:10 [conn1] end connection 10.255.119.66:51444 (7 connections now open)
m31202| Thu Jun 14 01:23:10 [conn5] end connection 10.255.119.66:35305 (3 connections now open)
m31202| Thu Jun 14 01:23:10 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:23:10 [interruptThread] closeAllFiles() finished
m31200| Thu Jun 14 01:23:10 [interruptThread] shutdown: removing fs lock...
m31200| Thu Jun 14 01:23:10 dbexit: really exiting now
m31202| Thu Jun 14 01:23:10 [rsHealthPoll] DBClientCursor::init call() failed
m31202| Thu Jun 14 01:23:10 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31200 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31200 ns: admin.$cmd query: { replSetHeartbeat: "add_shard2_rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31202" }
m31202| Thu Jun 14 01:23:10 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state DOWN
m31201| Thu Jun 14 01:23:10 [conn4] end connection 10.255.119.66:41840 (2 connections now open)
m31201| Thu Jun 14 01:23:10 [initandlisten] connection accepted from 10.255.119.66:41887 #8 (3 connections now open)
m31202| Thu Jun 14 01:23:10 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31201 would veto
Thu Jun 14 01:23:11 shell: stopped mongo program on port 31200
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
ReplSetTest stop *** Shutting down mongod in port 31201 ***
m31201| Thu Jun 14 01:23:11 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Thu Jun 14 01:23:11 [interruptThread] now exiting
m31201| Thu Jun 14 01:23:11 dbexit:
m31201| Thu Jun 14 01:23:11 [interruptThread] shutdown: going to close listening sockets...
m31201| Thu Jun 14 01:23:11 [interruptThread] closing listening socket: 26
m31201| Thu Jun 14 01:23:11 [interruptThread] closing listening socket: 27
m31201| Thu Jun 14 01:23:11 [interruptThread] closing listening socket: 29
m31201| Thu Jun 14 01:23:11 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Thu Jun 14 01:23:11 [interruptThread] shutdown: going to flush diaglog...
m31201| Thu Jun 14 01:23:11 [interruptThread] shutdown: going to close sockets...
m31201| Thu Jun 14 01:23:11 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Thu Jun 14 01:23:11 [interruptThread] shutdown: closing all files...
m31202| Thu Jun 14 01:23:11 [conn6] end connection 10.255.119.66:35306 (2 connections now open)
m31201| Thu Jun 14 01:23:11 [conn1] end connection 10.255.119.66:41830 (2 connections now open)
m31201| Thu Jun 14 01:23:11 [interruptThread] closeAllFiles() finished
m31201| Thu Jun 14 01:23:11 [interruptThread] shutdown: removing fs lock...
m31201| Thu Jun 14 01:23:11 dbexit: really exiting now
m31204| Thu Jun 14 01:23:12 [conn3] end connection 10.255.119.66:47868 (3 connections now open)
m31204| Thu Jun 14 01:23:12 [initandlisten] connection accepted from 10.255.119.66:47906 #7 (4 connections now open)
Thu Jun 14 01:23:12 shell: stopped mongo program on port 31201
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
ReplSetTest stop *** Shutting down mongod in port 31202 ***
m31202| Thu Jun 14 01:23:12 got signal 15 (Terminated), will terminate after current cmd ends
m31202| Thu Jun 14 01:23:12 [interruptThread] now exiting
m31202| Thu Jun 14 01:23:12 dbexit:
m31202| Thu Jun 14 01:23:12 [interruptThread] shutdown: going to close listening sockets...
m31202| Thu Jun 14 01:23:12 [interruptThread] closing listening socket: 30
m31202| Thu Jun 14 01:23:12 [interruptThread] closing listening socket: 31
m31202| Thu Jun 14 01:23:12 [interruptThread] closing listening socket: 32
m31202| Thu Jun 14 01:23:12 [interruptThread] removing socket file: /tmp/mongodb-31202.sock
m31202| Thu Jun 14 01:23:12 [interruptThread] shutdown: going to flush diaglog...
m31202| Thu Jun 14 01:23:12 [interruptThread] shutdown: going to close sockets...
m31202| Thu Jun 14 01:23:12 [interruptThread] shutdown: waiting for fs preallocator...
m31202| Thu Jun 14 01:23:12 [interruptThread] shutdown: closing all files...
m31202| Thu Jun 14 01:23:12 [interruptThread] closeAllFiles() finished
m31202| Thu Jun 14 01:23:12 [interruptThread] shutdown: removing fs lock...
m31202| Thu Jun 14 01:23:12 dbexit: really exiting now
Thu Jun 14 01:23:13 shell: stopped mongo program on port 31202
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest n: 0 ports: [ 31203, 31204, 31205 ] 31203 number
ReplSetTest stop *** Shutting down mongod in port 31203 ***
m31203| Thu Jun 14 01:23:13 got signal 15 (Terminated), will terminate after current cmd ends
m31203| Thu Jun 14 01:23:13 [interruptThread] now exiting
m31203| Thu Jun 14 01:23:13 dbexit:
m31203| Thu Jun 14 01:23:13 [interruptThread] shutdown: going to close listening sockets...
m31203| Thu Jun 14 01:23:13 [interruptThread] closing listening socket: 32
m31203| Thu Jun 14 01:23:13 [interruptThread] closing listening socket: 33
m31203| Thu Jun 14 01:23:13 [interruptThread] closing listening socket: 35
m31203| Thu Jun 14 01:23:13 [interruptThread] removing socket file: /tmp/mongodb-31203.sock
m31203| Thu Jun 14 01:23:13 [interruptThread] shutdown: going to flush diaglog...
m31203| Thu Jun 14 01:23:13 [interruptThread] shutdown: going to close sockets...
m31203| Thu Jun 14 01:23:13 [interruptThread] shutdown: waiting for fs preallocator...
m31203| Thu Jun 14 01:23:13 [interruptThread] shutdown: closing all files...
m31203| Thu Jun 14 01:23:13 [interruptThread] closeAllFiles() finished
m31203| Thu Jun 14 01:23:13 [interruptThread] shutdown: removing fs lock...
m31203| Thu Jun 14 01:23:13 dbexit: really exiting now
m31205| Thu Jun 14 01:23:13 [conn3] end connection 10.255.119.66:36902 (3 connections now open)
m31204| Thu Jun 14 01:23:13 [conn7] end connection 10.255.119.66:47906 (3 connections now open)
Thu Jun 14 01:23:14 shell: stopped mongo program on port 31203
ReplSetTest n: 1 ports: [ 31203, 31204, 31205 ] 31204 number
ReplSetTest stop *** Shutting down mongod in port 31204 ***
m31204| Thu Jun 14 01:23:14 got signal 15 (Terminated), will terminate after current cmd ends
m31204| Thu Jun 14 01:23:14 [interruptThread] now exiting
m31204| Thu Jun 14 01:23:14 dbexit:
m31204| Thu Jun 14 01:23:14 [interruptThread] shutdown: going to close listening sockets...
m31204| Thu Jun 14 01:23:14 [interruptThread] closing listening socket: 36
m31204| Thu Jun 14 01:23:14 [interruptThread] closing listening socket: 37
m31204| Thu Jun 14 01:23:14 [interruptThread] closing listening socket: 38
m31204| Thu Jun 14 01:23:14 [interruptThread] removing socket file: /tmp/mongodb-31204.sock
m31204| Thu Jun 14 01:23:14 [interruptThread] shutdown: going to flush diaglog...
m31204| Thu Jun 14 01:23:14 [interruptThread] shutdown: going to close sockets...
m31204| Thu Jun 14 01:23:14 [interruptThread] shutdown: waiting for fs preallocator...
m31204| Thu Jun 14 01:23:14 [interruptThread] shutdown: closing all files...
m31204| Thu Jun 14 01:23:14 [interruptThread] closeAllFiles() finished
m31204| Thu Jun 14 01:23:14 [interruptThread] shutdown: removing fs lock...
m31204| Thu Jun 14 01:23:14 dbexit: really exiting now
m31205| Thu Jun 14 01:23:14 [conn4] end connection 10.255.119.66:36914 (2 connections now open)
m31205| Thu Jun 14 01:23:14 [rsHealthPoll] DBClientCursor::init call() failed
m31205| Thu Jun 14 01:23:14 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31204 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31204 ns: admin.$cmd query: { replSetHeartbeat: "add_shard2_rs2", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31205" }
m31205| Thu Jun 14 01:23:14 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31204 is now in state DOWN
m31205| Thu Jun 14 01:23:14 [rsHealthPoll] DBClientCursor::init call() failed
m31205| Thu Jun 14 01:23:14 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31203 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31203 ns: admin.$cmd query: { replSetHeartbeat: "add_shard2_rs2", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31205" }
m31205| Thu Jun 14 01:23:14 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31203 is now in state DOWN
Thu Jun 14 01:23:15 shell: stopped mongo program on port 31204
ReplSetTest n: 2 ports: [ 31203, 31204, 31205 ] 31205 number
ReplSetTest stop *** Shutting down mongod in port 31205 ***
m31205| Thu Jun 14 01:23:15 got signal 15 (Terminated), will terminate after current cmd ends
m31205| Thu Jun 14 01:23:15 [interruptThread] now exiting
m31205| Thu Jun 14 01:23:15 dbexit:
m31205| Thu Jun 14 01:23:15 [interruptThread] shutdown: going to close listening sockets...
m31205| Thu Jun 14 01:23:15 [interruptThread] closing listening socket: 38
m31205| Thu Jun 14 01:23:15 [interruptThread] closing listening socket: 39
m31205| Thu Jun 14 01:23:15 [interruptThread] closing listening socket: 41
m31205| Thu Jun 14 01:23:15 [interruptThread] removing socket file: /tmp/mongodb-31205.sock
m31205| Thu Jun 14 01:23:15 [interruptThread] shutdown: going to flush diaglog...
m31205| Thu Jun 14 01:23:15 [interruptThread] shutdown: going to close sockets...
m31205| Thu Jun 14 01:23:15 [interruptThread] shutdown: waiting for fs preallocator...
m31205| Thu Jun 14 01:23:15 [interruptThread] shutdown: closing all files...
m31205| Thu Jun 14 01:23:15 [interruptThread] closeAllFiles() finished
m31205| Thu Jun 14 01:23:15 [interruptThread] shutdown: removing fs lock...
m31205| Thu Jun 14 01:23:15 dbexit: really exiting now
Thu Jun 14 01:23:16 shell: stopped mongo program on port 31205
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
m30001| Thu Jun 14 01:23:16 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:23:16 [interruptThread] now exiting
m30001| Thu Jun 14 01:23:16 dbexit:
m30001| Thu Jun 14 01:23:16 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:23:16 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:23:16 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:23:16 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:23:16 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:23:16 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:23:16 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:23:16 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:23:16 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:23:16 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:23:16 [clientcursormon] mem (MB) res:16 virt:108 mapped:0
m30001| Thu Jun 14 01:23:16 dbexit: really exiting now
m30002| Thu Jun 14 01:23:17 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:23:17 [interruptThread] now exiting
m30002| Thu Jun 14 01:23:17 dbexit:
m30002| Thu Jun 14 01:23:17 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:23:17 [interruptThread] closing listening socket: 20
m30002| Thu Jun 14 01:23:17 [interruptThread] closing listening socket: 21
m30002| Thu Jun 14 01:23:17 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:23:17 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:23:17 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:23:17 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:23:17 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:23:17 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:23:17 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:23:17 dbexit: really exiting now
                52489.073038ms
Thu Jun 14 01:23:18 [initandlisten] connection accepted from 127.0.0.1:42161 #3 (2 connections now open)
*******************************************
Test : addshard3.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard3.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard3.js";TestData.testFile = "addshard3.js";TestData.testName = "addshard3";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:23:18 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/add_shard30'
Thu Jun 14 01:23:18 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/add_shard30
m30000| Thu Jun 14 01:23:18
m30000| Thu Jun 14 01:23:18 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:23:18
m30000| Thu Jun 14 01:23:18 [initandlisten] MongoDB starting : pid=20893 port=30000 dbpath=/data/db/add_shard30 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:23:18 [initandlisten]
m30000| Thu Jun 14 01:23:18 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:23:18 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:23:18 [initandlisten]
m30000| Thu Jun 14 01:23:18 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:23:18 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:23:18 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:23:18 [initandlisten]
m30000| Thu Jun 14 01:23:18 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:23:18 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:23:18 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:23:18 [initandlisten] options: { dbpath: "/data/db/add_shard30", port: 30000 }
m30000| Thu Jun 14 01:23:18 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:23:18 [websvr] admin web console waiting for connections on port 31000
"localhost:30000"
m30000| Thu Jun 14 01:23:18 [initandlisten] connection accepted from 127.0.0.1:53748 #1 (1 connection now open)
ShardingTest add_shard3 :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000
    ]
}
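
For orientation: the document above is the cluster summary ShardingTest prints once it has a single config/shard mongod on port 30000 and is about to start a mongos on port 30999. A minimal sketch of the kind of jstest setup that produces output like this, assuming the object-form ShardingTest constructor from the shell test helpers; the actual options inside addshard3.js are not visible in this log:

    // hypothetical sketch, not the literal contents of addshard3.js
    var st = new ShardingTest({ name: "add_shard3", shards: 1, mongos: 1 });
    var admin = st.s.getDB("admin");                  // st.s is the mongos connection
    printjson(admin.runCommand({ listshards: 1 }));   // one shard expected here
    st.stop();                                        // tears the whole test cluster down
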
Thu Jun 14 01:23:18 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:23:18 [initandlisten] connection accepted from 127.0.0.1:53749 #2 (2 connections now open)
m30000| Thu Jun 14 01:23:18 [FileAllocator] allocating new datafile /data/db/add_shard30/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:23:18 [FileAllocator] creating directory /data/db/add_shard30/_tmp
m30999| Thu Jun 14 01:23:18 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:23:18 [mongosMain] MongoS version 2.1.2-pre- starting: pid=20908 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:23:18 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:23:18 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:23:18 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:23:18 [initandlisten] connection accepted from 127.0.0.1:53751 #3 (3 connections now open)
m30000| Thu Jun 14 01:23:19 [FileAllocator] done allocating datafile /data/db/add_shard30/config.ns, size: 16MB, took 0.275 secs
m30000| Thu Jun 14 01:23:19 [FileAllocator] allocating new datafile /data/db/add_shard30/config.0, filling with zeroes...
m30000| Thu Jun 14 01:23:19 [FileAllocator] done allocating datafile /data/db/add_shard30/config.0, size: 16MB, took 0.258 secs
m30000| Thu Jun 14 01:23:19 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [conn2] insert config.settings keyUpdates:0 locks(micros) w:555668 555ms
m30000| Thu Jun 14 01:23:19 [FileAllocator] allocating new datafile /data/db/add_shard30/config.1, filling with zeroes...
m30000| Thu Jun 14 01:23:19 [initandlisten] connection accepted from 127.0.0.1:53754 #4 (4 connections now open)
m30000| Thu Jun 14 01:23:19 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:23:19 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:23:19 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:23:19 [Balancer] about to contact config servers and shards
m30000| Thu Jun 14 01:23:19 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:23:19 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:23:19 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:23:19 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:23:19 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:23:19
m30999| Thu Jun 14 01:23:19 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:23:19 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [initandlisten] connection accepted from 127.0.0.1:53755 #5 (5 connections now open)
m30000| Thu Jun 14 01:23:19 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:23:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651399:1804289383' acquired, ts : 4fd97547e9e4fd2acdde310d
m30999| Thu Jun 14 01:23:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651399:1804289383' unlocked.
m30999| Thu Jun 14 01:23:19 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651399:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:23:19 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:19 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:23:19 [mongosMain] connection accepted from 127.0.0.1:52606 #1 (1 connection now open)
m30999| Thu Jun 14 01:23:19 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:23:19 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:23:19 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:23:19 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:23:19 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
m30000| Thu Jun 14 01:23:19 [FileAllocator] done allocating datafile /data/db/add_shard30/config.1, size: 32MB, took 0.592 secs
m30999| Thu Jun 14 01:23:27 [conn] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:23:27 [conn] addshard request { addshard: "localhost:31000" } failed: couldn't connect to new shard DBClientBase::findN: transport error: localhost:31000 ns: admin.$cmd query: { getlasterror: 1 }
{
    "ok" : 0,
    "errmsg" : "couldn't connect to new shard DBClientBase::findN: transport error: localhost:31000 ns: admin.$cmd query: { getlasterror: 1 }"
}
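
This failure is the expected negative case: no mongod is listening on localhost:31000, so mongos reports ok: 0 with a transport-error errmsg. A sketch of issuing the same command by hand against the mongos on 30999 and checking the result; only the error handling around the command is added here:

    // run from a shell connected to the mongos
    var admin = db.getSiblingDB("admin");
    var res = admin.runCommand({ addshard: "localhost:31000" });
    if (res.ok !== 1) {
        print("addshard failed: " + res.errmsg);   // e.g. "couldn't connect to new shard ..."
    }
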
m30000| Thu Jun 14 01:23:27 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:23:27 [interruptThread] now exiting
m30000| Thu Jun 14 01:23:27 dbexit:
m30000| Thu Jun 14 01:23:27 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:23:27 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:23:27 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:23:27 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:23:27 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:23:27 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:23:27 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:23:27 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:23:27 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:23:27 [interruptThread] closeAllFiles() finished
m30000| Thu J
                9893.828869ms
Thu Jun 14 01:23:28 [initandlisten] connection accepted from 127.0.0.1:42173 #4 (3 connections now open)
*******************************************
Test : addshard4.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard4.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard4.js";TestData.testFile = "addshard4.js";TestData.testName = "addshard4";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:23:28 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/addshard40'
Thu Jun 14 01:23:28 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/addshard40
m30000| Thu Jun 14 01:23:28
m30000| Thu Jun 14 01:23:28 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:23:28
m30000| Thu Jun 14 01:23:28 [initandlisten] MongoDB starting : pid=20931 port=30000 dbpath=/data/db/addshard40 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:23:28 [initandlisten]
m30000| Thu Jun 14 01:23:28 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:23:28 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:23:28 [initandlisten]
m30000| Thu Jun 14 01:23:28 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:23:28 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:23:28 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:23:28 [initandlisten]
m30000| Thu Jun 14 01:23:28 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:23:28 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:23:28 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:23:28 [initandlisten] options: { dbpath: "/data/db/addshard40", port: 30000 }
m30000| Thu Jun 14 01:23:28 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:23:28 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/addshard41'
m30000| Thu Jun 14 01:23:28 [initandlisten] connection accepted from 127.0.0.1:53760 #1 (1 connection now open)
Thu Jun 14 01:23:28 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/addshard41
m30001| Thu Jun 14 01:23:28
m30001| Thu Jun 14 01:23:28 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:23:28
m30001| Thu Jun 14 01:23:28 [initandlisten] MongoDB starting : pid=20944 port=30001 dbpath=/data/db/addshard41 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:23:28 [initandlisten]
m30001| Thu Jun 14 01:23:28 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:23:28 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:23:28 [initandlisten]
m30001| Thu Jun 14 01:23:28 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:23:28 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:23:28 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:23:28 [initandlisten]
m30001| Thu Jun 14 01:23:28 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:23:28 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:23:28 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:23:28 [initandlisten] options: { dbpath: "/data/db/addshard41", port: 30001 }
m30001| Thu Jun 14 01:23:28 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:23:28 [websvr] admin web console waiting for connections on port 31001
"domU-12-31-39-01-70-B4:30000"
m30001| Thu Jun 14 01:23:28 [initandlisten] connection accepted from 127.0.0.1:52238 #1 (1 connection now open)
ShardingTest addshard4 :
{
    "config" : "domU-12-31-39-01-70-B4:30000",
    "shards" : [
        connection to domU-12-31-39-01-70-B4:30000,
        connection to domU-12-31-39-01-70-B4:30001
    ]
}
m30000| Thu Jun 14 01:23:28 [initandlisten] connection accepted from 10.255.119.66:40686 #2 (2 connections now open)
m30000| Thu Jun 14 01:23:28 [FileAllocator] allocating new datafile /data/db/addshard40/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:23:28 [FileAllocator] creating directory /data/db/addshard40/_tmp
Thu Jun 14 01:23:28 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:30000
m30999| Thu Jun 14 01:23:28 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:23:28 [mongosMain] MongoS version 2.1.2-pre- starting: pid=20959 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:23:28 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:23:28 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:23:28 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:30000", port: 30999 }
m30000| Thu Jun 14 01:23:28 [initandlisten] connection accepted from 10.255.119.66:40688 #3 (3 connections now open)
m30000| Thu Jun 14 01:23:29 [FileAllocator] done allocating datafile /data/db/addshard40/config.ns, size: 16MB, took 0.31 secs
m30000| Thu Jun 14 01:23:29 [FileAllocator] allocating new datafile /data/db/addshard40/config.0, filling with zeroes...
m30000| Thu Jun 14 01:23:29 [FileAllocator] done allocating datafile /data/db/addshard40/config.0, size: 16MB, took 0.313 secs
m30000| Thu Jun 14 01:23:29 [FileAllocator] allocating new datafile /data/db/addshard40/config.1, filling with zeroes...
m30000| Thu Jun 14 01:23:29 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [initandlisten] connection accepted from 10.255.119.66:40691 #4 (4 connections now open)
m30000| Thu Jun 14 01:23:29 [conn2] insert config.settings keyUpdates:0 locks(micros) w:643086 642ms
m30000| Thu Jun 14 01:23:29 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:23:29 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:23:29 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:23:29 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:23:29 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:23:29 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:23:29 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:23:29 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:23:29
m30999| Thu Jun 14 01:23:29 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:23:29 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [initandlisten] connection accepted from 10.255.119.66:40692 #5 (5 connections now open)
m30000| Thu Jun 14 01:23:29 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:23:29 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:23:29 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:30000 and process domU-12-31-39-01-70-B4:30999:1339651409:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:23:29 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:23:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' acquired, ts : 4fd97551e6c4b45cdb309e2c
m30999| Thu Jun 14 01:23:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' unlocked.
ShardingTest undefined going to add shard : domU-12-31-39-01-70-B4:30000
m30999| Thu Jun 14 01:23:29 [mongosMain] connection accepted from 127.0.0.1:52620 #1 (1 connection now open)
m30999| Thu Jun 14 01:23:29 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:23:29 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:23:29 [conn3] build index done. scanned 0 total records. 0.011 secs
m30999| Thu Jun 14 01:23:29 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:30000
m30999| Thu Jun 14 01:23:29 [conn] going to add shard: { _id: "shard0000", host: "domU-12-31-39-01-70-B4:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : domU-12-31-39-01-70-B4:30001
m30001| Thu Jun 14 01:23:29 [initandlisten] connection accepted from 10.255.119.66:44209 #2 (2 connections now open)
m30999| Thu Jun 14 01:23:29 [conn] going to add shard: { _id: "shard0001", host: "domU-12-31-39-01-70-B4:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31100,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard4",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 0,
        "set" : "addshard4"
    }
}
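
The document above is the per-node option set ReplSetTest prints before launching node 0; $set and $node in dbpath are placeholders that the helper expands from pathOpts, giving /data/db/addshard4-0. A rough sketch of the helper calls behind output like this, assuming the ReplSetTest API from the shell test library (the exact arguments in addshard4.js are not shown in the log):

    // hypothetical sketch of a three-node set like the one started here
    var replTest = new ReplSetTest({ name: "addshard4", nodes: 3, oplogSize: 40 });
    replTest.startSet();            // prints one options document per node, as above
    replTest.initiate();            // sends replSetInitiate to the first node
    replTest.awaitSecondaryNodes(); // waits for the non-primaries to reach SECONDARY
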
ReplSetTest Starting....
Resetting db path '/data/db/addshard4-0'
Thu Jun 14 01:23:29 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet addshard4 --dbpath /data/db/addshard4-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:23:29
m31100| Thu Jun 14 01:23:29 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:23:29
m31100| Thu Jun 14 01:23:29 [initandlisten] MongoDB starting : pid=20980 port=31100 dbpath=/data/db/addshard4-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:23:29 [initandlisten]
m31100| Thu Jun 14 01:23:29 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:23:29 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:23:29 [initandlisten]
m31100| Thu Jun 14 01:23:29 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:23:29 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:23:29 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:23:29 [initandlisten]
m31100| Thu Jun 14 01:23:29 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:23:29 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:23:29 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:23:29 [initandlisten] options: { dbpath: "/data/db/addshard4-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "addshard4", rest: true, smallfiles: true }
m30000| Thu Jun 14 01:23:30 [FileAllocator] done allocating datafile /data/db/addshard40/config.1, size: 32MB, took 0.786 secs
m31100| Thu Jun 14 01:23:30 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:23:30 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:23:30 [initandlisten] connection accepted from 10.255.119.66:47455 #1 (1 connection now open)
m31100| Thu Jun 14 01:23:30 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:23:30 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to localhost:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31101,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard4",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 1,
        "set" : "addshard4"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard4-1'
m31100| Thu Jun 14 01:23:30 [initandlisten] connection accepted from 127.0.0.1:60266 #2 (2 connections now open)
Thu Jun 14 01:23:30 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet addshard4 --dbpath /data/db/addshard4-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:23:30
m31101| Thu Jun 14 01:23:30 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:23:30
m31101| Thu Jun 14 01:23:30 [initandlisten] MongoDB starting : pid=20999 port=31101 dbpath=/data/db/addshard4-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:23:30 [initandlisten]
m31101| Thu Jun 14 01:23:30 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:23:30 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:23:30 [initandlisten]
m31101| Thu Jun 14 01:23:30 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:23:30 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:23:30 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:23:30 [initandlisten]
m31101| Thu Jun 14 01:23:30 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:23:30 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:23:30 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:23:30 [initandlisten] options: { dbpath: "/data/db/addshard4-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "addshard4", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:23:30 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:23:30 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:23:30 [initandlisten] connection accepted from 10.255.119.66:40213 #1 (1 connection now open)
m31101| Thu Jun 14 01:23:30 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:23:30 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to localhost:31100, connection to localhost:31101 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31102,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard4",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 2,
        "set" : "addshard4"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard4-2'
m31101| Thu Jun 14 01:23:30 [initandlisten] connection accepted from 127.0.0.1:48177 #2 (2 connections now open)
Thu Jun 14 01:23:30 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31102 --noprealloc --smallfiles --rest --replSet addshard4 --dbpath /data/db/addshard4-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Thu Jun 14 01:23:30
m31102| Thu Jun 14 01:23:30 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Thu Jun 14 01:23:30
m31102| Thu Jun 14 01:23:30 [initandlisten] MongoDB starting : pid=21016 port=31102 dbpath=/data/db/addshard4-2 32-bit host=domU-12-31-39-01-70-B4
m31102| Thu Jun 14 01:23:30 [initandlisten]
m31102| Thu Jun 14 01:23:30 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Thu Jun 14 01:23:30 [initandlisten] ** Not recommended for production.
m31102| Thu Jun 14 01:23:30 [initandlisten]
m31102| Thu Jun 14 01:23:30 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Thu Jun 14 01:23:30 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Thu Jun 14 01:23:30 [initandlisten] ** with --journal, the limit is lower
m31102| Thu Jun 14 01:23:30 [initandlisten]
m31102| Thu Jun 14 01:23:30 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Thu Jun 14 01:23:30 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Thu Jun 14 01:23:30 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31102| Thu Jun 14 01:23:30 [initandlisten] options: { dbpath: "/data/db/addshard4-2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "addshard4", rest: true, smallfiles: true }
m31102| Thu Jun 14 01:23:30 [initandlisten] waiting for connections on port 31102
m31102| Thu Jun 14 01:23:30 [websvr] admin web console waiting for connections on port 32102
m31102| Thu Jun 14 01:23:30 [initandlisten] connection accepted from 10.255.119.66:45691 #1 (1 connection now open)
m31102| Thu Jun 14 01:23:30 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Thu Jun 14 01:23:30 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
    connection to localhost:31100,
    connection to localhost:31101,
    connection to localhost:31102
]
{
    "replSetInitiate" : {
        "_id" : "addshard4",
        "members" : [
            {
                "_id" : 0,
                "host" : "domU-12-31-39-01-70-B4:31100"
            },
            {
                "_id" : 1,
                "host" : "domU-12-31-39-01-70-B4:31101"
            },
            {
                "_id" : 2,
                "host" : "domU-12-31-39-01-70-B4:31102",
                "priority" : 0
            }
        ]
    }
}
m31102| Thu Jun 14 01:23:31 [initandlisten] connection accepted from 127.0.0.1:38468 #2 (2 connections now open)
m31100| Thu Jun 14 01:23:31 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:23:31 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Thu Jun 14 01:23:31 [initandlisten] connection accepted from 10.255.119.66:40218 #3 (3 connections now open)
m31102| Thu Jun 14 01:23:31 [initandlisten] connection accepted from 10.255.119.66:45694 #3 (3 connections now open)
m31100| Thu Jun 14 01:23:31 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:23:31 [conn2] ******
m31100| Thu Jun 14 01:23:31 [conn2] creating replication oplog of size: 40MB...
m31100| Thu Jun 14 01:23:31 [FileAllocator] allocating new datafile /data/db/addshard4-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:23:31 [FileAllocator] creating directory /data/db/addshard4-0/_tmp
m31100| Thu Jun 14 01:23:31 [FileAllocator] done allocating datafile /data/db/addshard4-0/local.ns, size: 16MB, took 0.239 secs
m31100| Thu Jun 14 01:23:31 [FileAllocator] allocating new datafile /data/db/addshard4-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:23:32 [FileAllocator] done allocating datafile /data/db/addshard4-0/local.0, size: 64MB, took 1.119 secs
m31100| Thu Jun 14 01:23:32 [conn2] ******
m31100| Thu Jun 14 01:23:32 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Thu Jun 14 01:23:32 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:23:32 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:23:32 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "addshard4", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31102", priority: 0.0 } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1399800 w:34 reslen:112 1400ms
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
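
The config document a few lines up is what the test sends as replSetInitiate to the node on 31100: three members, with member 2 at priority 0 so it can never be elected primary. The same initiation done by hand from a shell connected to that node would look roughly like this (host names copied from the log; rs.initiate(cfg) wraps the same command):

    var cfg = {
        _id: "addshard4",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31101" },
            { _id: 2, host: "domU-12-31-39-01-70-B4:31102", priority: 0 }
        ]
    };
    printjson(db.adminCommand({ replSetInitiate: cfg }));
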
m30999| Thu Jun 14 01:23:39 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' acquired, ts : 4fd9755be6c4b45cdb309e2d
m30999| Thu Jun 14 01:23:39 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' unlocked.
m31100| Thu Jun 14 01:23:40 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:23:40 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:23:40 [rsSync] replSet SECONDARY
m31100| Thu Jun 14 01:23:40 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31100| Thu Jun 14 01:23:40 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:23:40 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31100| Thu Jun 14 01:23:40 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31101| Thu Jun 14 01:23:40 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:23:40 [initandlisten] connection accepted from 10.255.119.66:47466 #3 (3 connections now open)
m31101| Thu Jun 14 01:23:40 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:23:40 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:23:40 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:23:40 [FileAllocator] allocating new datafile /data/db/addshard4-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:23:40 [FileAllocator] creating directory /data/db/addshard4-1/_tmp
m31102| Thu Jun 14 01:23:40 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:23:40 [initandlisten] connection accepted from 10.255.119.66:47467 #4 (4 connections now open)
m31102| Thu Jun 14 01:23:40 [rsStart] replSet I am domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:23:40 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Thu Jun 14 01:23:40 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:23:40 [FileAllocator] allocating new datafile /data/db/addshard4-2/local.ns, filling with zeroes...
m31102| Thu Jun 14 01:23:40 [FileAllocator] creating directory /data/db/addshard4-2/_tmp
m31101| Thu Jun 14 01:23:40 [FileAllocator] done allocating datafile /data/db/addshard4-1/local.ns, size: 16MB, took 0.272 secs
m31101| Thu Jun 14 01:23:41 [FileAllocator] allocating new datafile /data/db/addshard4-1/local.0, filling with zeroes...
m31102| Thu Jun 14 01:23:41 [FileAllocator] done allocating datafile /data/db/addshard4-2/local.ns, size: 16MB, took 0.616 secs
m31101| Thu Jun 14 01:23:41 [FileAllocator] done allocating datafile /data/db/addshard4-1/local.0, size: 16MB, took 0.607 secs
m31101| Thu Jun 14 01:23:41 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:23:41 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:23:41 [rsSync] ******
m31101| Thu Jun 14 01:23:41 [rsSync] creating replication oplog of size: 40MB...
m31101| Thu Jun 14 01:23:41 [FileAllocator] allocating new datafile /data/db/addshard4-1/local.1, filling with zeroes...
m31102| Thu Jun 14 01:23:41 [FileAllocator] allocating new datafile /data/db/addshard4-2/local.0, filling with zeroes...
m31102| Thu Jun 14 01:23:42 [FileAllocator] done allocating datafile /data/db/addshard4-2/local.0, size: 16MB, took 0.481 secs
m31100| Thu Jun 14 01:23:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:23:42 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31101 would veto
m31101| Thu Jun 14 01:23:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:23:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31102| Thu Jun 14 01:23:42 [initandlisten] connection accepted from 10.255.119.66:45697 #4 (4 connections now open)
m31101| Thu Jun 14 01:23:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31101| Thu Jun 14 01:23:43 [FileAllocator] done allocating datafile /data/db/addshard4-1/local.1, size: 64MB, took 1.46 secs
m31102| Thu Jun 14 01:23:43 [rsStart] replSet saveConfigLocally done
m31102| Thu Jun 14 01:23:43 [rsStart] replSet STARTUP2
m31102| Thu Jun 14 01:23:43 [rsSync] ******
m31102| Thu Jun 14 01:23:43 [rsSync] creating replication oplog of size: 40MB...
m31102| Thu Jun 14 01:23:43 [FileAllocator] allocating new datafile /data/db/addshard4-2/local.1, filling with zeroes...
m31101| Thu Jun 14 01:23:43 [rsSync] ******
m31101| Thu Jun 14 01:23:43 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:23:43 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Thu Jun 14 01:23:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31100| Thu Jun 14 01:23:44 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31102| Thu Jun 14 01:23:44 [FileAllocator] done allocating datafile /data/db/addshard4-2/local.1, size: 64MB, took 1.245 secs
m31102| Thu Jun 14 01:23:44 [rsSync] ******
m31102| Thu Jun 14 01:23:44 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:23:44 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31101| Thu Jun 14 01:23:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31102| Thu Jun 14 01:23:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31102| Thu Jun 14 01:23:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31101| Thu Jun 14 01:23:44 [initandlisten] connection accepted from 10.255.119.66:40223 #4 (4 connections now open)
m31102| Thu Jun 14 01:23:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31102| Thu Jun 14 01:23:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m30999| Thu Jun 14 01:23:49 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' acquired, ts : 4fd97565e6c4b45cdb309e2e
m30999| Thu Jun 14 01:23:49 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' unlocked.
m31100| Thu Jun 14 01:23:50 [rsMgr] replSet info electSelf 0
m31102| Thu Jun 14 01:23:50 [conn3] replSet RECOVERING
m31102| Thu Jun 14 01:23:50 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31101| Thu Jun 14 01:23:50 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:23:50 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:23:50 [rsMgr] replSet PRIMARY
ReplSetTest Timestamp(1339651412000, 1)
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m31101| Thu Jun 14 01:23:50 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31101| Thu Jun 14 01:23:50 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31102| Thu Jun 14 01:23:50 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31102| Thu Jun 14 01:23:50 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31100| Thu Jun 14 01:23:52 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31100| Thu Jun 14 01:23:52 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m31101| Thu Jun 14 01:23:54 [conn3] end connection 10.255.119.66:40218 (3 connections now open)
m31101| Thu Jun 14 01:23:54 [initandlisten] connection accepted from 10.255.119.66:40224 #5 (4 connections now open)
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m31100| Thu Jun 14 01:23:56 [conn3] end connection 10.255.119.66:47466 (3 connections now open)
m31100| Thu Jun 14 01:23:56 [initandlisten] connection accepted from 10.255.119.66:47471 #5 (4 connections now open)
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m31100| Thu Jun 14 01:23:58 [conn4] end connection 10.255.119.66:47467 (3 connections now open)
m31100| Thu Jun 14 01:23:58 [initandlisten] connection accepted from 10.255.119.66:47472 #6 (4 connections now open)
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:23:59 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:23:59 [initandlisten] connection accepted from 10.255.119.66:47473 #7 (5 connections now open)
m31101| Thu Jun 14 01:23:59 [rsSync] build index local.me { _id: 1 }
m31101| Thu Jun 14 01:23:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync drop all databases
m31101| Thu Jun 14 01:23:59 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync clone all databases
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync data copy, starting syncup
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync building indexes
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync query minValid
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync finishing up
m31101| Thu Jun 14 01:23:59 [rsSync] replSet set minValid=4fd97554:1
m31101| Thu Jun 14 01:23:59 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Thu Jun 14 01:23:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:23:59 [conn7] end connection 10.255.119.66:47473 (4 connections now open)
m31101| Thu Jun 14 01:23:59 [rsSync] replSet initial sync done
m30999| Thu Jun 14 01:23:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' acquired, ts : 4fd9756fe6c4b45cdb309e2f
m30999| Thu Jun 14 01:23:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' unlocked.
m31101| Thu Jun 14 01:23:59 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:23:59 [initandlisten] connection accepted from 10.255.119.66:47474 #8 (5 connections now open)
m31101| Thu Jun 14 01:24:00 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:24:00 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:24:00 [initandlisten] connection accepted from 10.255.119.66:47475 #9 (6 connections now open)
m31100| Thu Jun 14 01:24:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:24:00 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:24:00 [initandlisten] connection accepted from 10.255.119.66:47476 #10 (7 connections now open)
m31102| Thu Jun 14 01:24:00 [rsSync] build index local.me { _id: 1 }
m31102| Thu Jun 14 01:24:00 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync drop all databases
m31102| Thu Jun 14 01:24:00 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync clone all databases
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync data copy, starting syncup
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync building indexes
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync query minValid
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync finishing up
m31102| Thu Jun 14 01:24:00 [rsSync] replSet set minValid=4fd97554:1
m31102| Thu Jun 14 01:24:00 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Thu Jun 14 01:24:00 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:24:00 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:24:00 [conn10] end connection 10.255.119.66:47476 (6 connections now open)
{
    "ts" : Timestamp(1339651412000, 1),
    "h" : NumberLong(0),
    "op" : "n",
    "ns" : "",
    "o" : {
        "msg" : "initiating set"
    }
}
ReplSetTest await TS for connection to localhost:31101 is 1339651412000:1 and latest is 1339651412000:1
ReplSetTest await oplog size for connection to localhost:31101 is 1
{
    "ts" : Timestamp(1339651412000, 1),
    "h" : NumberLong(0),
    "op" : "n",
    "ns" : "",
    "o" : {
        "msg" : "initiating set"
    }
}
ReplSetTest await TS for connection to localhost:31102 is 1339651412000:1 and latest is 1339651412000:1
ReplSetTest await oplog size for connection to localhost:31102 is 1
ReplSetTest await synced=true
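
ReplSetTest has been polling each secondary until its oplog contains the "initiating set" no-op written at initiation time; the two documents above are that first entry as read back from 31101 and 31102. A sketch of the same check done manually, assuming a shell connected to one of the secondaries:

    rs.slaveOk();   // allow reads while this node is a SECONDARY
    // the oldest entry should be the op: "n" / "initiating set" seed record shown above
    var local = db.getSiblingDB("local");
    printjson(local.oplog.rs.find().sort({ $natural: 1 }).limit(1).next());
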
adding shard addshard4/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:24:00 [conn] starting new replica set monitor for replica set addshard4 with seed of foobar:27017
m30999| Thu Jun 14 01:24:00 [conn] getaddrinfo("foobar") failed: Name or service not known
m30999| Thu Jun 14 01:24:00 [conn] error connecting to seed foobar:27017 :: caused by :: 15928 couldn't connect to server foobar:27017
m31102| Thu Jun 14 01:24:00 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31102| Thu Jun 14 01:24:01 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:24:01 [initandlisten] connection accepted from 10.255.119.66:47477 #11 (7 connections now open)
m31100| Thu Jun 14 01:24:01 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Thu Jun 14 01:24:01 [slaveTracking] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:24:01 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:24:01 [initandlisten] connection accepted from 10.255.119.66:47478 #12 (8 connections now open)
m31102| Thu Jun 14 01:24:01 [rsSync] replSet SECONDARY
m31100| Thu Jun 14 01:24:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m30999| Thu Jun 14 01:24:02 [conn] warning: No primary detected for set addshard4
m30999| Thu Jun 14 01:24:02 [conn] All nodes for set addshard4 are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
m30999| Thu Jun 14 01:24:02 [conn] replica set monitor for replica set addshard4 started, address is addshard4/
m30999| Thu Jun 14 01:24:02 [ReplicaSetMonitorWatcher] starting
m31101| Thu Jun 14 01:24:02 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m30999| Thu Jun 14 01:24:04 [conn] warning: No primary detected for set addshard4
m30999| Thu Jun 14 01:24:04 [conn] deleting replica set monitor for: addshard4/
m30999| Thu Jun 14 01:24:04 [conn] addshard request { addshard: "addshard4/foobar" } failed: couldn't connect to new shard socket exception
m30999| Thu Jun 14 01:24:04 [conn] starting new replica set monitor for replica set addshard4 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:47479 #13 (9 connections now open)
m30999| Thu Jun 14 01:24:04 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set addshard4
m30999| Thu Jun 14 01:24:04 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31101", 2: "domU-12-31-39-01-70-B4:31102" } from addshard4/
m30999| Thu Jun 14 01:24:04 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set addshard4
m31100| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:47480 #14 (10 connections now open)
m30999| Thu Jun 14 01:24:04 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set addshard4
m30999| Thu Jun 14 01:24:04 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set addshard4
m31101| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:40235 #6 (5 connections now open)
m30999| Thu Jun 14 01:24:04 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set addshard4
m30999| Thu Jun 14 01:24:04 [conn] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set addshard4
m31102| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:45711 #5 (5 connections now open)
m30999| Thu Jun 14 01:24:04 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set addshard4
m31100| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:47483 #15 (11 connections now open)
m31100| Thu Jun 14 01:24:04 [conn13] end connection 10.255.119.66:47479 (10 connections now open)
m30999| Thu Jun 14 01:24:04 [conn] Primary for replica set addshard4 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:40238 #7 (6 connections now open)
m31102| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:45714 #6 (6 connections now open)
m30999| Thu Jun 14 01:24:04 [conn] replica set monitor for replica set addshard4 started, address is addshard4/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:47486 #16 (11 connections now open)
m30999| Thu Jun 14 01:24:04 [conn] going to add shard: { _id: "addshard4", host: "addshard4/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" }
true
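
Two addshard attempts are interleaved above: the first seeds the replica set monitor with addshard4/foobar, the bogus host never resolves, and the request fails; the second uses the real host list, mongos discovers all three members, and the shard is recorded as addshard4/... (the test prints true on success). A sketch of the successful form, run against the mongos; at least one host in the setName/host1,host2 seed string must be reachable:

    var admin = db.getSiblingDB("admin");
    var res = admin.runCommand({
        addshard: "addshard4/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101"
    });
    printjson(res);   // { "shardAdded" : "addshard4", "ok" : 1 } on success
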
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31200,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard42",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 0,
        "set" : "addshard42"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard42-0'
Thu Jun 14 01:24:04 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet addshard42 --dbpath /data/db/addshard42-0
m31200| note: noprealloc may hurt performance in many applications
m31200| Thu Jun 14 01:24:04
m31200| Thu Jun 14 01:24:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31200| Thu Jun 14 01:24:04
m31200| Thu Jun 14 01:24:04 [initandlisten] MongoDB starting : pid=21113 port=31200 dbpath=/data/db/addshard42-0 32-bit host=domU-12-31-39-01-70-B4
m31200| Thu Jun 14 01:24:04 [initandlisten]
m31200| Thu Jun 14 01:24:04 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31200| Thu Jun 14 01:24:04 [initandlisten] ** Not recommended for production.
m31200| Thu Jun 14 01:24:04 [initandlisten]
m31200| Thu Jun 14 01:24:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31200| Thu Jun 14 01:24:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31200| Thu Jun 14 01:24:04 [initandlisten] ** with --journal, the limit is lower
m31200| Thu Jun 14 01:24:04 [initandlisten]
m31200| Thu Jun 14 01:24:04 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31200| Thu Jun 14 01:24:04 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31200| Thu Jun 14 01:24:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31200| Thu Jun 14 01:24:04 [initandlisten] options: { dbpath: "/data/db/addshard42-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "addshard42", rest: true, smallfiles: true }
m31200| Thu Jun 14 01:24:04 [initandlisten] waiting for connections on port 31200
m31200| Thu Jun 14 01:24:04 [websvr] admin web console waiting for connections on port 32200
m31200| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:51569 #1 (1 connection now open)
m31200| Thu Jun 14 01:24:04 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Thu Jun 14 01:24:04 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31200| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 127.0.0.1:48484 #2 (2 connections now open)
[ connection to localhost:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31201,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard42",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 1,
        "set" : "addshard42"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard42-1'
Thu Jun 14 01:24:04 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet addshard42 --dbpath /data/db/addshard42-1
m31201| note: noprealloc may hurt performance in many applications
m31201| Thu Jun 14 01:24:04
m31201| Thu Jun 14 01:24:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31201| Thu Jun 14 01:24:04
m31201| Thu Jun 14 01:24:04 [initandlisten] MongoDB starting : pid=21129 port=31201 dbpath=/data/db/addshard42-1 32-bit host=domU-12-31-39-01-70-B4
m31201| Thu Jun 14 01:24:04 [initandlisten]
m31201| Thu Jun 14 01:24:04 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31201| Thu Jun 14 01:24:04 [initandlisten] ** Not recommended for production.
m31201| Thu Jun 14 01:24:04 [initandlisten]
m31201| Thu Jun 14 01:24:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31201| Thu Jun 14 01:24:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31201| Thu Jun 14 01:24:04 [initandlisten] ** with --journal, the limit is lower
m31201| Thu Jun 14 01:24:04 [initandlisten]
m31201| Thu Jun 14 01:24:04 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31201| Thu Jun 14 01:24:04 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31201| Thu Jun 14 01:24:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31201| Thu Jun 14 01:24:04 [initandlisten] options: { dbpath: "/data/db/addshard42-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "addshard42", rest: true, smallfiles: true }
m31201| Thu Jun 14 01:24:04 [initandlisten] waiting for connections on port 31201
m31201| Thu Jun 14 01:24:04 [websvr] admin web console waiting for connections on port 32201
m31201| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 10.255.119.66:41955 #1 (1 connection now open)
m31201| Thu Jun 14 01:24:04 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Thu Jun 14 01:24:04 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to localhost:31200, connection to localhost:31201 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31202,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard42",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 2,
        "set" : "addshard42"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard42-2'
m31201| Thu Jun 14 01:24:04 [initandlisten] connection accepted from 127.0.0.1:45078 #2 (2 connections now open)
Thu Jun 14 01:24:05 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31202 --noprealloc --smallfiles --rest --replSet addshard42 --dbpath /data/db/addshard42-2
m31202| note: noprealloc may hurt performance in many applications
m31202| Thu Jun 14 01:24:05
m31202| Thu Jun 14 01:24:05 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31202| Thu Jun 14 01:24:05
m31202| Thu Jun 14 01:24:05 [initandlisten] MongoDB starting : pid=21145 port=31202 dbpath=/data/db/addshard42-2 32-bit host=domU-12-31-39-01-70-B4
m31202| Thu Jun 14 01:24:05 [initandlisten]
m31202| Thu Jun 14 01:24:05 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31202| Thu Jun 14 01:24:05 [initandlisten] ** Not recommended for production.
m31202| Thu Jun 14 01:24:05 [initandlisten]
m31202| Thu Jun 14 01:24:05 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31202| Thu Jun 14 01:24:05 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31202| Thu Jun 14 01:24:05 [initandlisten] ** with --journal, the limit is lower
m31202| Thu Jun 14 01:24:05 [initandlisten]
m31202| Thu Jun 14 01:24:05 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31202| Thu Jun 14 01:24:05 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31202| Thu Jun 14 01:24:05 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31202| Thu Jun 14 01:24:05 [initandlisten] options: { dbpath: "/data/db/addshard42-2", noprealloc: true, oplogSize: 40, port: 31202, replSet: "addshard42", rest: true, smallfiles: true }
m31202| Thu Jun 14 01:24:05 [initandlisten] waiting for connections on port 31202
m31202| Thu Jun 14 01:24:05 [websvr] admin web console waiting for connections on port 32202
m31202| Thu Jun 14 01:24:05 [initandlisten] connection accepted from 10.255.119.66:35398 #1 (1 connection now open)
m31202| Thu Jun 14 01:24:05 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31202| Thu Jun 14 01:24:05 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
    connection to localhost:31200,
    connection to localhost:31201,
    connection to localhost:31202
]
{
    "replSetInitiate" : {
        "_id" : "addshard42",
        "members" : [
            {
                "_id" : 0,
                "host" : "domU-12-31-39-01-70-B4:31200"
            },
            {
                "_id" : 1,
                "host" : "domU-12-31-39-01-70-B4:31201"
            },
            {
                "_id" : 2,
                "host" : "domU-12-31-39-01-70-B4:31202",
                "arbiterOnly" : true
            }
        ]
    }
}
m31202| Thu Jun 14 01:24:05 [initandlisten] connection accepted from 127.0.0.1:46544 #2 (2 connections now open)
m31200| Thu Jun 14 01:24:05 [conn2] replSet replSetInitiate admin command received from client
m31200| Thu Jun 14 01:24:05 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31201| Thu Jun 14 01:24:05 [initandlisten] connection accepted from 10.255.119.66:41960 #3 (3 connections now open)
m31202| Thu Jun 14 01:24:05 [initandlisten] connection accepted from 10.255.119.66:35401 #3 (3 connections now open)
m31200| Thu Jun 14 01:24:05 [conn2] replSet replSetInitiate all members seem up
m31200| Thu Jun 14 01:24:05 [conn2] ******
m31200| Thu Jun 14 01:24:05 [conn2] creating replication oplog of size: 40MB...
m31200| Thu Jun 14 01:24:05 [FileAllocator] allocating new datafile /data/db/addshard42-0/local.ns, filling with zeroes...
m31200| Thu Jun 14 01:24:05 [FileAllocator] creating directory /data/db/addshard42-0/_tmp
m31200| Thu Jun 14 01:24:05 [FileAllocator] done allocating datafile /data/db/addshard42-0/local.ns, size: 16MB, took 0.232 secs
m31200| Thu Jun 14 01:24:05 [FileAllocator] allocating new datafile /data/db/addshard42-0/local.0, filling with zeroes...
m31200| Thu Jun 14 01:24:06 [FileAllocator] done allocating datafile /data/db/addshard42-0/local.0, size: 64MB, took 1.332 secs
m31200| Thu Jun 14 01:24:06 [conn2] ******
m31200| Thu Jun 14 01:24:06 [conn2] replSet info saving a newer config version to local.system.replset
m31200| Thu Jun 14 01:24:06 [conn2] replSet saveConfigLocally done
m31200| Thu Jun 14 01:24:06 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Thu Jun 14 01:24:06 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "addshard42", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31200" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31201" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31202", arbiterOnly: true } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1636868 w:35 reslen:112 1637ms
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
m31102| Thu Jun 14 01:24:08 [conn3] end connection 10.255.119.66:45694 (5 connections now open)
m31102| Thu Jun 14 01:24:08 [initandlisten] connection accepted from 10.255.119.66:45727 #7 (6 connections now open)
m30999| Thu Jun 14 01:24:09 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' acquired, ts : 4fd97579e6c4b45cdb309e30
m30999| Thu Jun 14 01:24:09 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' unlocked.
m31102| Thu Jun 14 01:24:10 [conn4] end connection 10.255.119.66:45697 (5 connections now open)
m31102| Thu Jun 14 01:24:10 [initandlisten] connection accepted from 10.255.119.66:45728 #8 (6 connections now open)
m31101| Thu Jun 14 01:24:12 [conn4] end connection 10.255.119.66:40223 (5 connections now open)
m31101| Thu Jun 14 01:24:12 [initandlisten] connection accepted from 10.255.119.66:40254 #8 (6 connections now open)
m31200| Thu Jun 14 01:24:14 [rsStart] replSet I am domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:24:14 [rsStart] replSet STARTUP2
m31200| Thu Jun 14 01:24:14 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is up
m31200| Thu Jun 14 01:24:14 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is up
m31200| Thu Jun 14 01:24:14 [rsSync] replSet SECONDARY
m31201| Thu Jun 14 01:24:14 [rsStart] trying to contact domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:24:14 [initandlisten] connection accepted from 10.255.119.66:51582 #3 (3 connections now open)
m31201| Thu Jun 14 01:24:14 [rsStart] replSet I am domU-12-31-39-01-70-B4:31201
m31201| Thu Jun 14 01:24:14 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Thu Jun 14 01:24:14 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Thu Jun 14 01:24:14 [FileAllocator] allocating new datafile /data/db/addshard42-1/local.ns, filling with zeroes...
m31201| Thu Jun 14 01:24:14 [FileAllocator] creating directory /data/db/addshard42-1/_tmp
m31202| Thu Jun 14 01:24:15 [rsStart] trying to contact domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:24:15 [initandlisten] connection accepted from 10.255.119.66:51583 #4 (4 connections now open)
m31202| Thu Jun 14 01:24:15 [rsStart] replSet I am domU-12-31-39-01-70-B4:31202
m31202| Thu Jun 14 01:24:15 [rsStart] replSet got config version 1 from a remote, saving locally
m31202| Thu Jun 14 01:24:15 [rsStart] replSet info saving a newer config version to local.system.replset
m31202| Thu Jun 14 01:24:15 [FileAllocator] allocating new datafile /data/db/addshard42-2/local.ns, filling with zeroes...
m31202| Thu Jun 14 01:24:15 [FileAllocator] creating directory /data/db/addshard42-2/_tmp
m31201| Thu Jun 14 01:24:15 [FileAllocator] done allocating datafile /data/db/addshard42-1/local.ns, size: 16MB, took 0.225 secs
m31201| Thu Jun 14 01:24:15 [FileAllocator] allocating new datafile /data/db/addshard42-1/local.0, filling with zeroes...
m31202| Thu Jun 14 01:24:15 [FileAllocator] done allocating datafile /data/db/addshard42-2/local.ns, size: 16MB, took 0.638 secs
m31201| Thu Jun 14 01:24:15 [FileAllocator] done allocating datafile /data/db/addshard42-1/local.0, size: 16MB, took 0.629 secs
m31201| Thu Jun 14 01:24:15 [rsStart] replSet saveConfigLocally done
m31201| Thu Jun 14 01:24:15 [rsStart] replSet STARTUP2
m31201| Thu Jun 14 01:24:15 [rsSync] ******
m31201| Thu Jun 14 01:24:15 [rsSync] creating replication oplog of size: 40MB...
m31201| Thu Jun 14 01:24:15 [FileAllocator] allocating new datafile /data/db/addshard42-1/local.1, filling with zeroes...
m31202| Thu Jun 14 01:24:15 [FileAllocator] allocating new datafile /data/db/addshard42-2/local.0, filling with zeroes...
m31202| Thu Jun 14 01:24:16 [FileAllocator] done allocating datafile /data/db/addshard42-2/local.0, size: 16MB, took 0.42 secs
m31200| Thu Jun 14 01:24:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state STARTUP2
m31200| Thu Jun 14 01:24:16 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31201 would veto
m31201| Thu Jun 14 01:24:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is up
m31201| Thu Jun 14 01:24:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state SECONDARY
m31202| Thu Jun 14 01:24:16 [initandlisten] connection accepted from 10.255.119.66:35407 #4 (4 connections now open)
m31201| Thu Jun 14 01:24:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is up
m31202| Thu Jun 14 01:24:17 [rsStart] replSet saveConfigLocally done
m31202| Thu Jun 14 01:24:17 [rsStart] replSet STARTUP2
m31201| Thu Jun 14 01:24:17 [FileAllocator] done allocating datafile /data/db/addshard42-1/local.1, size: 64MB, took 1.543 secs
m31201| Thu Jun 14 01:24:17 [rsSync] ******
m31201| Thu Jun 14 01:24:17 [rsSync] replSet initial sync pending
m31201| Thu Jun 14 01:24:17 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31200| Thu Jun 14 01:24:18 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state STARTUP2
m31200| Thu Jun 14 01:24:18 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31202 would veto
m31201| Thu Jun 14 01:24:18 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state STARTUP2
m31202| Thu Jun 14 01:24:19 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is up
m31202| Thu Jun 14 01:24:19 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state SECONDARY
m31201| Thu Jun 14 01:24:19 [initandlisten] connection accepted from 10.255.119.66:41968 #4 (4 connections now open)
m31202| Thu Jun 14 01:24:19 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is up
m31202| Thu Jun 14 01:24:19 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state STARTUP2
m30999| Thu Jun 14 01:24:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' acquired, ts : 4fd97583e6c4b45cdb309e31
m30999| Thu Jun 14 01:24:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' unlocked.
m31101| Thu Jun 14 01:24:24 [conn5] end connection 10.255.119.66:40224 (5 connections now open)
m31101| Thu Jun 14 01:24:24 [initandlisten] connection accepted from 10.255.119.66:40259 #9 (6 connections now open)
m31200| Thu Jun 14 01:24:24 [rsMgr] replSet info electSelf 0
m31202| Thu Jun 14 01:24:24 [conn3] replSet RECOVERING
m31202| Thu Jun 14 01:24:24 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31200 (0)
m31201| Thu Jun 14 01:24:24 [conn3] replSet RECOVERING
m31201| Thu Jun 14 01:24:24 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31200 (0)
m31200| Thu Jun 14 01:24:24 [rsMgr] replSet PRIMARY
m31201| Thu Jun 14 01:24:24 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state PRIMARY
m31201| Thu Jun 14 01:24:24 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state RECOVERING
ReplSetTest Timestamp(1339651446000, 1)
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
ReplSetTest waiting for connection to localhost:31202 to have an oplog built.
m31202| Thu Jun 14 01:24:25 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state RECOVERING
m31202| Thu Jun 14 01:24:25 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state PRIMARY
m31200| Thu Jun 14 01:24:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state ARBITER
m31200| Thu Jun 14 01:24:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state RECOVERING
m31100| Thu Jun 14 01:24:26 [conn5] end connection 10.255.119.66:47471 (10 connections now open)
m31100| Thu Jun 14 01:24:26 [initandlisten] connection accepted from 10.255.119.66:47506 #17 (11 connections now open)
m31201| Thu Jun 14 01:24:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state ARBITER
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m30000| Thu Jun 14 01:24:28 [clientcursormon] mem (MB) res:33 virt:145 mapped:32
m31201| Thu Jun 14 01:24:28 [conn3] end connection 10.255.119.66:41960 (3 connections now open)
m31201| Thu Jun 14 01:24:28 [initandlisten] connection accepted from 10.255.119.66:41971 #5 (4 connections now open)
m30001| Thu Jun 14 01:24:28 [clientcursormon] mem (MB) res:16 virt:109 mapped:0
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m31100| Thu Jun 14 01:24:29 [conn6] end connection 10.255.119.66:47472 (10 connections now open)
m31100| Thu Jun 14 01:24:29 [initandlisten] connection accepted from 10.255.119.66:47508 #18 (11 connections now open)
m30999| Thu Jun 14 01:24:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' acquired, ts : 4fd9758de6c4b45cdb309e32
m30999| Thu Jun 14 01:24:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651409:1804289383' unlocked.
m31100| Thu Jun 14 01:24:30 [clientcursormon] mem (MB) res:33 virt:298 mapped:80
m31101| Thu Jun 14 01:24:30 [clientcursormon] mem (MB) res:33 virt:290 mapped:96
m31200| Thu Jun 14 01:24:30 [conn3] end connection 10.255.119.66:51582 (3 connections now open)
m31200| Thu Jun 14 01:24:30 [initandlisten] connection accepted from 10.255.119.66:51590 #5 (4 connections now open)
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m31102| Thu Jun 14 01:24:30 [clientcursormon] mem (MB) res:33 virt:290 mapped:96
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m31200| Thu Jun 14 01:24:33 [conn4] end connection 10.255.119.66:51583 (3 connections now open)
m31200| Thu Jun 14 01:24:33 [initandlisten] connection accepted from 10.255.119.66:51591 #6 (5 connections now open)
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync pending
m31201| Thu Jun 14 01:24:33 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:24:33 [rsSync] build index local.me { _id: 1 }
m31201| Thu Jun 14 01:24:33 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync drop all databases
m31201| Thu Jun 14 01:24:33 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync clone all databases
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync data copy, starting syncup
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync building indexes
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync query minValid
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync finishing up
m31200| Thu Jun 14 01:24:33 [initandlisten] connection accepted from 10.255.119.66:51592 #7 (5 connections now open)
m31201| Thu Jun 14 01:24:33 [rsSync] replSet set minValid=4fd97576:1
m31201| Thu Jun 14 01:24:33 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Thu Jun 14 01:24:33 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:24:33 [rsSync] replSet initial sync done
m31200| Thu Jun 14 01:24:33 [conn7] end connection 10.255.119.66:51592 (4 connections now open)
m31201| Thu Jun 14 01:24:33 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:24:33 [initandlisten] connection accepted from 10.255.119.66:51593 #8 (5 connections now open)
m31201| Thu Jun 14 01:24:34 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:24:34 [rsSync] replSet SECONDARY
m31200| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:51594 #9 (6 connections now open)
m31200| Thu Jun 14 01:24:34 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state SECONDARY
{
    "ts" : Timestamp(1339651446000, 1),
    "h" : NumberLong(0),
    "op" : "n",
    "ns" : "",
    "o" : {
        "msg" : "initiating set"
    }
}
ReplSetTest await TS for connection to localhost:31201 is 1339651446000:1 and latest is 1339651446000:1
ReplSetTest await oplog size for connection to localhost:31201 is 1
ReplSetTest await synced=true
adding shard addshard42
m30999| Thu Jun 14 01:24:34 [conn] starting new replica set monitor for replica set addshard42 with seed of domU-12-31-39-01-70-B4:31202
m31202| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:35418 #5 (5 connections now open)
m30999| Thu Jun 14 01:24:34 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31202 for replica set addshard42
m30999| Thu Jun 14 01:24:34 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31201", 1: "domU-12-31-39-01-70-B4:31200" } from addshard42/
m30999| Thu Jun 14 01:24:34 [conn] trying to add new host domU-12-31-39-01-70-B4:31200 to replica set addshard42
m31200| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:51596 #10 (7 connections now open)
m30999| Thu Jun 14 01:24:34 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31200 in replica set addshard42
m30999| Thu Jun 14 01:24:34 [conn] trying to add new host domU-12-31-39-01-70-B4:31201 to replica set addshard42
m31201| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:41980 #6 (5 connections now open)
m30999| Thu Jun 14 01:24:34 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31201 in replica set addshard42
m31202| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:35421 #6 (6 connections now open)
m31202| Thu Jun 14 01:24:34 [conn5] end connection 10.255.119.66:35418 (5 connections now open)
m31200| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:51599 #11 (8 connections now open)
m30999| Thu Jun 14 01:24:34 [conn] Primary for replica set addshard42 changed to domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:41983 #7 (6 connections now open)
m30999| Thu Jun 14 01:24:34 [conn] replica set monitor for replica set addshard42 started, address is addshard42/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201
true
m31200| Thu Jun 14 01:24:34 [initandlisten] connection accepted from 10.255.119.66:51601 #12 (9 connections now open)
m30999| Thu Jun 14 01:24:34 [conn] going to add shard: { _id: "addshard42", host: "addshard42/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201" }
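(The addShard step just logged can also be issued by hand; a minimal sketch, assuming a shell connected to the mongos on port 30999. The harness seeded the monitor with the arbiter on 31202 and the monitor resolved the data-bearing members, so any member of the set works as a seed.)

    // Hedged sketch: add the replica set above as a shard, using the
    // setName/host,host connection string form shown in the log.
    db.getSiblingDB("admin").runCommand({
        addShard: "addshard42/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201"
    });
    // sh.addShard("addshard42/...") is the shell helper for the same command.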
m30000| Thu Jun 14 01:24:34 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:24:34 [interruptThread] now exiting
m30000| Thu Jun 14 01:24:34 dbexit:
m30000| Thu Jun 14 01:24:34 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:24:34 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:24:34 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:24:34 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:24:34 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:24:34 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:24:34 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:24:34 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:24:34 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:24:34 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:24:34 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:24:34 dbexit: really exiting now
m31202| Thu Jun 14 01:24:35 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state SECONDARY
m31200| Thu Jun 14 01:24:35 [slaveTracking] build index local.slaves { _id: 1 }
m31200| Thu Jun 14 01:24:35 [slaveTracking] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:24:35 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:24:35 [interruptThread] now exiting
m30001| Thu Jun 14 01:24:35 dbexit:
m30001| Thu Jun 14 01:24:35 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:24:35 [interruptThread] closing listening socket: 15
m30001| Thu Jun 14 01:24:35 [interruptThread] closing listening socket: 16
m30001| Thu Jun 14 01:24:35 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:24:35 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:24:35 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:24:35 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:24:35 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:24:35 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:24:35 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:24:35 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:24:35 dbexit: really exiting now
m30999| Thu Jun 14 01:24:36 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31100| Thu Jun 14 01:24:36 [conn14] end connection 10.255.119.66:47480 (10 connections now open)
m31102| Thu Jun 14 01:24:36 [conn5] end connection 10.255.119.66:45711 (5 connections now open)
m31100| Thu Jun 14 01:24:36 [conn15] end connection 10.255.119.66:47483 (9 connections now open)
m31102| Thu Jun 14 01:24:36 [conn6] end connection 10.255.119.66:45714 (4 connections now open)
m31100| Thu Jun 14 01:24:36 [conn16] end connection 10.255.119.66:47486 (8 connections now open)
m31201| Thu Jun 14 01:24:36 [conn6] end connection 10.255.119.66:41980 (5 connections now open)
m31202| Thu Jun 14 01:24:36 [conn6] end connection 10.255.119.66:35421 (4 connections now open)
m31201| Thu Jun 14 01:24:36 [conn7] end connection 10.255.119.66:41983 (4 connections now open)
m31101| Thu Jun 14 01:24:36 [conn7] end connection 10.255.119.66:40238 (5 connections now open)
m31101| Thu Jun 14 01:24:36 [conn6] end connection 10.255.119.66:40235 (4 connections now open)
m31200| Thu Jun 14 01:24:36 [conn11] end connection 10.255.119.66:51599 (8 connections now open)
m31200| Thu Jun 14 01:24:36 [conn10] end connection 10.255.119.66:51596 (7 connections now open)
m31200| Thu Jun 14 01:24:36 [conn12] end connection 10.255.119.66:51601 (6 connections now open)
m31100| Thu Jun 14 01:24:37 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:24:37 [interruptThread] now exiting
m31100| Thu Jun 14 01:24:37 dbexit:
m31100| Thu Jun 14 01:24:37 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:24:37 [interruptThread] closing listening socket: 23
m31100| Thu Jun 14 01:24:37 [interruptThread] closing listening socket: 24
m31100| Thu Jun 14 01:24:37 [interruptThread] closing listening socket: 25
m31100| Thu Jun 14 01:24:37 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:24:37 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:24:37 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:24:37 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Thu Jun 14 01:24:37 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:24:37 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:24:37 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:24:37 dbexit: really exiting now
m31101| Thu Jun 14 01:24:37 [conn9] end connection 10.255.119.66:40259 (3 connections now open)
m31102| Thu Jun 14 01:24:37 [conn7] end connection 10.255.119.66:45727 (3 connections now open)
m31101| Thu Jun 14 01:24:37 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:24:37 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:24:38 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Thu Jun 14 01:24:38 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "addshard4", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31101" }
m31101| Thu Jun 14 01:24:38 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31101| Thu Jun 14 01:24:38 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31101| Thu Jun 14 01:24:38 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:24:38 [interruptThread] now exiting
m31101| Thu Jun 14 01:24:38 dbexit:
m31101| Thu Jun 14 01:24:38 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:24:38 [interruptThread] closing listening socket: 26
m31101| Thu Jun 14 01:24:38 [interruptThread] closing listening socket: 27
m31101| Thu Jun 14 01:24:38 [interruptThread] closing listening socket: 28
m31101| Thu Jun 14 01:24:38 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:24:38 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:24:38 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:24:38 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:24:38 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:24:38 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:24:38 [conn8] end connection 10.255.119.66:45728 (2 connections now open)
m31101| Thu Jun 14 01:24:38 [interruptThread] shutdown: removing fs lock...
m31101| Thu Jun 14 01:24:38 [conn1] end connection 10.255.119.66:40213 (2 connections now open)
m31101| Thu Jun 14 01:24:38 dbexit: really exiting now
m31102| Thu Jun 14 01:24:39 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:24:39 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31101 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31101 ns: admin.$cmd query: { replSetHeartbeat: "addshard4", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:24:39 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state DOWN
m31102| Thu Jun 14 01:24:39 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:24:39 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "addshard4", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:24:39 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31102| Thu Jun 14 01:24:39 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Thu Jun 14 01:24:39 [interruptThread] now exiting
m31102| Thu Jun 14 01:24:39 dbexit:
m31102| Thu Jun 14 01:24:39 [interruptThread] shutdown: going to close listening sockets...
m31102| Thu Jun 14 01:24:39 [interruptThread] closing listening socket: 29
m31102| Thu Jun 14 01:24:39 [interruptThread] closing listening socket: 30
m31102| Thu Jun 14 01:24:39 [interruptThread] closing listening socket: 31
m31102| Thu Jun 14 01:24:39 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Thu Jun 14 01:24:39 [interruptThread] shutdown: going to flush diaglog...
m31102| Thu Jun 14 01:24:39 [interruptThread] shutdown: going to close sockets...
m31102| Thu Jun 14 01:24:39 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:24:39 [interruptThread] shutdown: closing all files...
m31102| Thu Jun 14 01:24:39 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:24:39 [interruptThread] shutdown: removing fs lock...
m31102| Thu Jun 14 01:24:39 [conn1] end connection 10.255.119.66:45691 (1 connection now open)
m31102| Thu Jun 14 01:24:39 dbexit: really exiting now
m31200| Thu Jun 14 01:24:40 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Thu Jun 14 01:24:40 [interruptThread] now exiting
m31200| Thu Jun 14 01:24:40 dbexit:
m31200| Thu Jun 14 01:24:40 [interruptThread] shutdown: going to close listening sockets...
m31200| Thu Jun 14 01:24:40 [interruptThread] closing listening socket: 31
m31200| Thu Jun 14 01:24:40 [interruptThread] closing listening socket: 32
m31200| Thu Jun 14 01:24:40 [interruptThread] closing listening socket: 34
m31200| Thu Jun 14 01:24:40 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Thu Jun 14 01:24:40 [interruptThread] shutdown: going to flush diaglog...
m31200| Thu Jun 14 01:24:40 [interruptThread] shutdown: going to close sockets...
m31200| Thu Jun 14 01:24:40 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Thu Jun 14 01:24:40 [interruptThread] shutdown: closing all files...
m31200| Thu Jun 14 01:24:40 [interruptThread] closeAllFiles() finished
m31200| Thu Jun 14 01:24:40 [interruptThread] shutdown: removing fs lock...
m31201| Thu Jun 14 01:24:40 [conn5] end connection 10.255.119.66:41971 (3 connections now open)
m31202| Thu Jun 14 01:24:40 [conn3] end connection 10.255.119.66:35401 (3 connections now open)
m31201| Thu Jun 14 01:24:40 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:24:40 dbexit: really exiting now
m31202| Thu Jun 14 01:24:41 [rsHealthPoll] DBClientCursor::init call() failed
m31202| Thu Jun 14 01:24:41 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31200 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31200 ns: admin.$cmd query: { replSetHeartbeat: "addshard42", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31202" }
m31202| Thu Jun 14 01:24:41 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state DOWN
m31201| Thu Jun 14 01:24:41 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Thu Jun 14 01:24:41 [interruptThread] now exiting
m31201| Thu Jun 14 01:24:41 dbexit:
m31201| Thu Jun 14 01:24:41 [interruptThread] shutdown: going to close listening sockets...
m31201| Thu Jun 14 01:24:41 [interruptThread] closing listening socket: 34
m31201| Thu Jun 14 01:24:41 [interruptThread] closing listening socket: 35
m31201| Thu Jun 14 01:24:41 [interruptThread] closing listening socket: 37
m31201| Thu Jun 14 01:24:41 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Thu Jun 14 01:24:41 [interruptThread] shutdown: going to flush diaglog...
m31201| Thu Jun 14 01:24:41 [interruptThread] shutdown: going to close sockets...
m31201| Thu Jun 14 01:24:41 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Thu Jun 14 01:24:41 [interruptThread] shutdown: closing all files...
m31201| Thu Jun 14 01:24:41 [interruptThread] closeAllFiles() finished
m31202| Thu Jun 14 01:24:41 [conn4] end connection 10.255.119.66:35407 (2 connections now open)
m31201| Thu Jun 14 01:24:41 [interruptThread] shutdown: removing fs lock...
m31201| Thu Jun 14 01:24:41 [conn1] end connection 10.255.119.66:41955 (2 connections now open)
m31201| Thu Jun 14 01:24:41 dbexit: really exiting now
m31202| Thu Jun 14 01:24:42 got signal 15 (Terminated), will terminate after current cmd ends
m31202| Thu Jun 14 01:24:42 [interruptThread] now exiting
m31202| Thu Jun 14 01:24:42 dbexit:
m31202| Thu Jun 14 01:24:42 [interruptThread] shutdown: going to close listening sockets...
m31202| Thu Jun 14 01:24:42 [interruptThread] closing listening socket: 37
m31202| Thu Jun 14 01:24:42 [interruptThread] closing listening socket: 38
m31202| Thu Jun 14 01:24:42 [interruptThread] closing listening socket: 40
m31202| Thu Jun 14 01:24:42 [interruptThread] removing socket file: /tmp/mongodb-31202.sock
m31202| Thu Jun 14 01:24:42 [interruptThread] shutdown: going to flush diaglog...
m31202| Thu Jun 14 01:24:42 [interruptThread] shutdown: going to close sockets...
m31202| Thu Jun 14 01:24:42 [interruptThread] shutdown: waiting for fs preallocator...
m31202| Thu Jun 14 01:24:42 [interruptThread] shutdown: closing all files...
m31202| Thu Jun 14 01:24:42 [interr
                75576.745033ms
Thu Jun 14 01:24:43 [initandlisten] connection accepted from 127.0.0.1:42257 #5 (4 connections now open)
*******************************************
Test : addshard5.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard5.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/addshard5.js";TestData.testFile = "addshard5.js";TestData.testName = "addshard5";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:24:43 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:24:44 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:24:44
m30000| Thu Jun 14 01:24:44 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:24:44
m30000| Thu Jun 14 01:24:44 [initandlisten] MongoDB starting : pid=21248 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:24:44 [initandlisten]
m30000| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:24:44 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:24:44 [initandlisten]
m30000| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:24:44 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:24:44 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:24:44 [initandlisten]
m30000| Thu Jun 14 01:24:44 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:24:44 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:24:44 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:24:44 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:24:44 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:24:44 [websvr] admin web console waiting for connections on port 31000
m30000| Thu Jun 14 01:24:44 [initandlisten] connection accepted from 127.0.0.1:53844 #1 (1 connection now open)
Resetting db path '/data/db/test1'
Thu Jun 14 01:24:44 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30001| Thu Jun 14 01:24:44
m30001| Thu Jun 14 01:24:44 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:24:44
m30001| Thu Jun 14 01:24:44 [initandlisten] MongoDB starting : pid=21261 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:24:44 [initandlisten]
m30001| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:24:44 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:24:44 [initandlisten]
m30001| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:24:44 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:24:44 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:24:44 [initandlisten]
m30001| Thu Jun 14 01:24:44 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:24:44 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:24:44 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:24:44 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:24:44 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:24:44 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/test2'
m30001| Thu Jun 14 01:24:44 [initandlisten] connection accepted from 127.0.0.1:52322 #1 (1 connection now open)
Thu Jun 14 01:24:44 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/test2
m30002| Thu Jun 14 01:24:44
m30002| Thu Jun 14 01:24:44 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:24:44
m30002| Thu Jun 14 01:24:44 [initandlisten] MongoDB starting : pid=21274 port=30002 dbpath=/data/db/test2 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:24:44 [initandlisten]
m30002| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:24:44 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:24:44 [initandlisten]
m30002| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:24:44 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:24:44 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:24:44 [initandlisten]
m30002| Thu Jun 14 01:24:44 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:24:44 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:24:44 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:24:44 [initandlisten] options: { dbpath: "/data/db/test2", port: 30002 }
m30002| Thu Jun 14 01:24:44 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:24:44 [websvr] admin web console waiting for connections on port 31002
Resetting db path '/data/db/test-config0'
m30002| Thu Jun 14 01:24:44 [initandlisten] connection accepted from 127.0.0.1:59679 #1 (1 connection now open)
Thu Jun 14 01:24:44 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m29000| Thu Jun 14 01:24:44
m29000| Thu Jun 14 01:24:44 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:24:44
m29000| Thu Jun 14 01:24:44 [initandlisten] MongoDB starting : pid=21287 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:24:44 [initandlisten]
m29000| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:24:44 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:24:44 [initandlisten]
m29000| Thu Jun 14 01:24:44 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:24:44 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:24:44 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:24:44 [initandlisten]
m29000| Thu Jun 14 01:24:44 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:24:44 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:24:44 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:24:44 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:24:44 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:24:44 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:24:44 [websvr] ERROR: addr already in use
"localhost:29000"
ShardingTest test :
{
    "config" : "localhost:29000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001,
        connection to localhost:30002
    ]
}
Thu Jun 14 01:24:44 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:29000
m29000| Thu Jun 14 01:24:44 [initandlisten] connection accepted from 127.0.0.1:46213 #1 (1 connection now open)
m29000| Thu Jun 14 01:24:44 [initandlisten] connection accepted from 127.0.0.1:46214 #2 (2 connections now open)
m29000| Thu Jun 14 01:24:44 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:24:44 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29000| Thu Jun 14 01:24:44 [initandlisten] connection accepted from 127.0.0.1:46216 #3 (3 connections now open)
m30999| Thu Jun 14 01:24:44 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:24:44 [mongosMain] MongoS version 2.1.2-pre- starting: pid=21300 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:24:44 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:24:44 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:24:44 [mongosMain] options: { configdb: "localhost:29000", port: 30999 }
m29000| Thu Jun 14 01:24:45 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.26 secs
m29000| Thu Jun 14 01:24:45 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:24:45 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.282 secs
m29000| Thu Jun 14 01:24:45 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:24:45 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [conn2] insert config.settings keyUpdates:0 locks(micros) w:553881 553ms
m29000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:46219 #4 (4 connections now open)
m29000| Thu Jun 14 01:24:45 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:46220 #5 (5 connections now open)
m30999| Thu Jun 14 01:24:45 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:24:45 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:24:45 [Balancer] about to contact config servers and shards
m29000| Thu Jun 14 01:24:45 [conn5] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [conn5] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:24:45 [conn5] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:24:45 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:24:45 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:24:45 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [conn5] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [conn5] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:24:45 [conn5] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:24:45 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:24:45 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:24:45 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:24:45
m30999| Thu Jun 14 01:24:45 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:24:45 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:46221 #6 (6 connections now open)
m29000| Thu Jun 14 01:24:45 [conn6] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:24:45 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651485:1804289383' acquired, ts : 4fd9759db841ec3dd47409da
m30999| Thu Jun 14 01:24:45 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651485:1804289383' unlocked.
m30999| Thu Jun 14 01:24:45 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30999:1339651485:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:24:45 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:45 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:24:45 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:24:45 [mongosMain] connection accepted from 127.0.0.1:52709 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:24:45 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:24:45 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:24:45 [conn] put [admin] on: config:localhost:29000
m30000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:53860 #2 (2 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:52337 #2 (2 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:59693 #2 (2 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
m30999| Thu Jun 14 01:24:45 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd9759db841ec3dd47409d9
m30000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:53863 #3 (3 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd9759db841ec3dd47409d9
m30001| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:52340 #3 (3 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd9759db841ec3dd47409d9
m30002| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:59696 #3 (3 connections now open)
m29000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:46229 #7 (7 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd9759db841ec3dd47409d9
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:53867 #4 (4 connections now open)
m30001| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:52344 #4 (4 connections now open)
m30002| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:59700 #4 (4 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] going to start draining shard: shard0002
m30999| primaryLocalDoc: { _id: "local", primary: "shard0002" }
{
    "msg" : "draining started successfully",
    "state" : "started",
    "shard" : "shard0002",
    "ok" : 1
}
m30999| Thu Jun 14 01:24:45 [conn] going to remove shard: shard0002
{
    "msg" : "removeshard completed successfully",
    "state" : "completed",
    "shard" : "shard0002",
    "ok" : 1
}
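(The two result documents above come from running removeShard twice; a minimal sketch of the same flow, assuming a shell connected to the mongos and the shard name from this run:)

    // Hedged sketch: two-phase shard removal as seen above. The first call
    // starts draining; calling it again reports "completed" once the shard
    // holds no chunks or databases (immediately here, since shard0002 is empty).
    var admin = db.getSiblingDB("admin");
    printjson(admin.runCommand({ removeShard: "shard0002" }));  // "state" : "started"
    printjson(admin.runCommand({ removeShard: "shard0002" }));  // "state" : "completed"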
m30999| Thu Jun 14 01:24:45 [conn] couldn't find database [foo] in config db
m30000| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:53870 #5 (5 connections now open)
m30001| Thu Jun 14 01:24:45 [initandlisten] connection accepted from 127.0.0.1:52347 #5 (5 connections now open)
m30999| Thu Jun 14 01:24:45 [conn] put [foo] on: shard0000:localhost:30000
m30999| Thu Jun 14 01:24:45 [conn] enabling sharding on: foo
{ "ok" : 1 }
{ "ok" : 0, "errmsg" : "it is already the primary" }
m30999| Thu Jun 14 01:24:45 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:24:45 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:24:45 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd9759db841ec3dd47409db
m30000| Thu Jun 14 01:24:45 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30000| Thu Jun 14 01:24:45 [FileAllocator] creating directory /data/db/test0/_tmp
m30999| Thu Jun 14 01:24:45 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd9759db841ec3dd47409db based on: (empty)
m29000| Thu Jun 14 01:24:45 [conn3] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:24:45 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:24:46 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.662 secs
m30000| Thu Jun 14 01:24:46 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.406 secs
m30000| Thu Jun 14 01:24:46 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:24:46 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.276 secs
m30000| Thu Jun 14 01:24:46 [conn5] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:24:46 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:46 [conn5] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:24:46 [conn5] insert foo.system.indexes keyUpdates:0 locks(micros) W:88 r:300 w:1255600 1255ms
m30000| Thu Jun 14 01:24:46 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9759db841ec3dd47409db'), serverID: ObjectId('4fd9759db841ec3dd47409d9'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:96 reslen:171 1254ms
m30000| Thu Jun 14 01:24:46 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30000| Thu Jun 14 01:24:46 [conn3] no current chunk manager found for this shard, will initialize
m29000| Thu Jun 14 01:24:46 [initandlisten] connection accepted from 127.0.0.1:46235 #8 (8 connections now open)
m30999| Thu Jun 14 01:24:46 [conn] resetting shard version of foo.bar on localhost:30001, version is zero
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30999| Thu Jun 14 01:24:46 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:24:46 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m29000| Thu Jun 14 01:24:46 [initandlisten] connection accepted from 127.0.0.1:46236 #9 (9 connections now open)
m30000| Thu Jun 14 01:24:46 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:24:46 [conn5] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:24:46 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30000:1339651486:592189111 (sleeping for 30000ms)
m30000| Thu Jun 14 01:24:46 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651486:592189111' acquired, ts : 4fd9759e4b47a9c859d26a9d
m30000| Thu Jun 14 01:24:46 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:46-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:53870", time: new Date(1339651486807), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:24:46 [conn5] moveChunk request accepted at version 1|0||4fd9759db841ec3dd47409db
m30000| Thu Jun 14 01:24:46 [conn5] moveChunk number of documents: 1
m30000| Thu Jun 14 01:24:46 [initandlisten] connection accepted from 127.0.0.1:53875 #6 (6 connections now open)
m30001| Thu Jun 14 01:24:46 [initandlisten] connection accepted from 127.0.0.1:52350 #6 (6 connections now open)
m30001| Thu Jun 14 01:24:46 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:24:46 [FileAllocator] creating directory /data/db/test1/_tmp
m30000| Thu Jun 14 01:24:47 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.582 secs
m30001| Thu Jun 14 01:24:47 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.274 secs
m30001| Thu Jun 14 01:24:47 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:24:47 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:24:47 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.275 secs
m30001| Thu Jun 14 01:24:47 [migrateThread] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:24:47 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:24:47 [migrateThread] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:24:47 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30001| Thu Jun 14 01:24:47 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:24:48 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.602 secs
m30000| Thu Jun 14 01:24:48 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:24:48 [conn5] moveChunk setting version to: 2|0||4fd9759db841ec3dd47409db
m30001| Thu Jun 14 01:24:48 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30001| Thu Jun 14 01:24:48 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:48-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651488818), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 5: 1132, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 876 } }
m29000| Thu Jun 14 01:24:48 [initandlisten] connection accepted from 127.0.0.1:46239 #10 (10 connections now open)
m30000| Thu Jun 14 01:24:48 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:24:48 [conn5] moveChunk moved last chunk out for collection 'foo.bar'
m30000| Thu Jun 14 01:24:48 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:48-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:53870", time: new Date(1339651488823), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:24:48 [conn5] doing delete inline
m30000| Thu Jun 14 01:24:48 [conn5] moveChunk deleted: 1
m30000| Thu Jun 14 01:24:48 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651486:592189111' unlocked.
m30000| Thu Jun 14 01:24:48 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:48-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:53870", time: new Date(1339651488824), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 2005, step5 of 6: 8, step6 of 6: 0 } }
m30000| Thu Jun 14 01:24:48 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:88 r:465 w:1256132 reslen:37 2019ms
{ "millis" : 2020, "ok" : 1 }
m30999| Thu Jun 14 01:24:48 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 2|0||4fd9759db841ec3dd47409db based on: 1|0||4fd9759db841ec3dd47409db
m30999| Thu Jun 14 01:24:48 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:24:48 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 2|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m29000| Thu Jun 14 01:24:48 [initandlisten] connection accepted from 127.0.0.1:46240 #11 (11 connections now open)
m30001| Thu Jun 14 01:24:48 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:24:48 [conn5] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:24:48 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30001:1339651488:140449212 (sleeping for 30000ms)
m30001| Thu Jun 14 01:24:48 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651488:140449212' acquired, ts : 4fd975a0b3dfc366fcf3aabc
m30001| Thu Jun 14 01:24:48 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:48-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52347", time: new Date(1339651488829), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:24:48 [conn5] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:24:48 [conn5] moveChunk request accepted at version 2|0||4fd9759db841ec3dd47409db
m30001| Thu Jun 14 01:24:48 [conn5] moveChunk number of documents: 1
m30000| Thu Jun 14 01:24:48 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30001| Thu Jun 14 01:24:49 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:24:49 [conn5] moveChunk setting version to: 3|0||4fd9759db841ec3dd47409db
m30000| Thu Jun 14 01:24:49 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30000| Thu Jun 14 01:24:49 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:49-3", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651489838), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1007 } }
m30001| Thu Jun 14 01:24:49 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:24:49 [conn5] moveChunk moved last chunk out for collection 'foo.bar'
m30001| Thu Jun 14 01:24:49 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:49-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52347", time: new Date(1339651489842), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:24:49 [conn5] doing delete inline
m30001| Thu Jun 14 01:24:49 [conn5] moveChunk deleted: 1
m30001| Thu Jun 14 01:24:49 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651488:140449212' unlocked.
m30001| Thu Jun 14 01:24:49 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:49-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52347", time: new Date(1339651489843), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:24:49 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:117 w:241 reslen:37 1017ms
m30999| Thu Jun 14 01:24:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 3|0||4fd9759db841ec3dd47409db based on: 2|0||4fd9759db841ec3dd47409db
{ "millis" : 1018, "ok" : 1 }
m30999| Thu Jun 14 01:24:49 [conn] going to start draining shard: shard0001
m30999| primaryLocalDoc: { _id: "local", primary: "shard0001" }
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard0001",
"ok" : 1
}
m30999| Thu Jun 14 01:24:49 [conn] going to remove shard: shard0001
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "shard0001",
"ok" : 1
}
m30002| Thu Jun 14 01:24:49 [initandlisten] connection accepted from 127.0.0.1:59709 #5 (5 connections now open)
m30999| Thu Jun 14 01:24:49 [conn] going to add shard: { _id: "shard0001", host: "localhost:30002" }
{ "shardAdded" : "shard0001", "ok" : 1 }
----
Shard was dropped and re-added with same name...
----
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30002" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
		foo.bar chunks:
				shard0000	1
			{ "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(3000, 0)
m30999| Thu Jun 14 01:24:49 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:24:49 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 3|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30002
m30000| Thu Jun 14 01:24:49 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:24:49 [conn5] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:24:49 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651486:592189111' acquired, ts : 4fd975a14b47a9c859d26a9e
m30000| Thu Jun 14 01:24:49 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:49-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:53870", time: new Date(1339651489879), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:24:49 [conn5] moveChunk request accepted at version 3|0||4fd9759db841ec3dd47409db
m30000| Thu Jun 14 01:24:49 [conn5] moveChunk number of documents: 1
m30002| Thu Jun 14 01:24:49 [initandlisten] connection accepted from 127.0.0.1:59710 #6 (6 connections now open)
m30000| Thu Jun 14 01:24:49 [initandlisten] connection accepted from 127.0.0.1:53880 #7 (7 connections now open)
m30002| Thu Jun 14 01:24:49 [FileAllocator] allocating new datafile /data/db/test2/foo.ns, filling with zeroes...
m30002| Thu Jun 14 01:24:49 [FileAllocator] creating directory /data/db/test2/_tmp
m30002| Thu Jun 14 01:24:50 [FileAllocator] done allocating datafile /data/db/test2/foo.ns, size: 16MB, took 0.353 secs
m30002| Thu Jun 14 01:24:50 [FileAllocator] allocating new datafile /data/db/test2/foo.0, filling with zeroes...
m30002| Thu Jun 14 01:24:50 [FileAllocator] done allocating datafile /data/db/test2/foo.0, size: 16MB, took 0.345 secs
m30002| Thu Jun 14 01:24:50 [FileAllocator] allocating new datafile /data/db/test2/foo.1, filling with zeroes...
m30002| Thu Jun 14 01:24:50 [migrateThread] build index foo.bar { _id: 1 }
m30002| Thu Jun 14 01:24:50 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:24:50 [migrateThread] info: creating collection foo.bar on add index
m30002| Thu Jun 14 01:24:50 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30000| Thu Jun 14 01:24:50 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:24:50 [conn5] moveChunk setting version to: 4|0||4fd9759db841ec3dd47409db
m30002| Thu Jun 14 01:24:50 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30002| Thu Jun 14 01:24:50 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:50-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651490894), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 5: 711, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 300 } }
m29000| Thu Jun 14 01:24:50 [initandlisten] connection accepted from 127.0.0.1:46244 #12 (12 connections now open)
m30000| Thu Jun 14 01:24:50 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:24:50 [conn5] moveChunk moved last chunk out for collection 'foo.bar'
m30000| Thu Jun 14 01:24:50 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:50-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:53870", time: new Date(1339651490899), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:24:50 [conn5] doing delete inline
m30000| Thu Jun 14 01:24:50 [conn5] moveChunk deleted: 1
m30000| Thu Jun 14 01:24:50 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651486:592189111' unlocked.
m30000| Thu Jun 14 01:24:50 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:50-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:53870", time: new Date(1339651490900), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 16, step6 of 6: 0 } }
m30000| Thu Jun 14 01:24:50 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:88 r:547 w:1256304 reslen:37 1021ms
m30999| Thu Jun 14 01:24:50 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 4|0||4fd9759db841ec3dd47409db based on: 3|0||4fd9759db841ec3dd47409db
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:24:50 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Thu Jun 14 01:24:50 [conn3] end connection 127.0.0.1:52340 (5 connections now open)
m29000| Thu Jun 14 01:24:50 [conn3] end connection 127.0.0.1:46216 (11 connections now open)
m29000| Thu Jun 14 01:24:50 [conn4] end connection 127.0.0.1:46219 (10 connections now open)
m29000| Thu Jun 14 01:24:50 [conn7] end connection 127.0.0.1:46229 (9 connections now open)
m29000| Thu Jun 14 01:24:50 [conn6] end connection 127.0.0.1:46221 (9 connections now open)
m30000| Thu Jun 14 01:24:50 [conn3] warning: DBException thrown :: caused by :: 9001 socket exception
m30000| Thu Jun 14 01:24:50 [conn5] warning: DBException thrown :: caused by :: 9001 socket exception
m30002| Thu Jun 14 01:24:50 [conn3] warning: DBException thrown :: caused by :: 9001 socket exception
m30002| 0x8800c8a 0x874d22b 0x876e51a 0x8437b88 0x87d2bf6 0x632542 0x1e5b6e
m30002| Thu Jun 14 01:24:50 [conn5] warning: DBException thrown :: caused by :: 9001 socket exception
m29000| Thu Jun 14 01:24:50 [conn5] end connection 127.0.0.1:46220 (7 connections now open)
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x8800c8a]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12b) [0x874d22b]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x2aa) [0x876e51a]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xa8) [0x8437b88]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x2a6) [0x87d2bf6]
m30002| /lib/i686/nosegneg/libpthread.so.0 [0x632542]
m30002| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x1e5b6e]
m30002| 0x8800c8a 0x874d22b 0x876e51a 0x8437b88 0x87d2bf6 0x632542 0x1e5b6e
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x8800c8a]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12b) [0x874d22b]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x2aa) [0x876e51a]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xa8) [0x8437b88]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x2a6) [0x87d2bf6]
m30002| /lib/i686/nosegneg/libpthread.so.0 [0x632542]
m30002| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x1e5b6e]
m30002| Thu Jun 14 01:24:50 [conn5] end connection 127.0.0.1:59709 (5 connections now open)
m30002| Thu Jun 14 01:24:50 [conn3] end connection 127.0.0.1:59696 (4 connections now open)
m30000| 0x8800c8a 0x874d22b 0x876e51a 0x8437b88 0x87d2bf6 0xd41542 0x38eb6e
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x8800c8a]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12b) [0x874d22b]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x2aa) [0x876e51a]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xa8) [0x8437b88]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x2a6) [0x87d2bf6]
m30000| /lib/i686/nosegneg/libpthread.so.0 [0xd41542]
m30000| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x38eb6e]
m30000| Thu Jun 14 01:24:50 [conn5] end connection 127.0.0.1:53870 (6 connections now open)
m30000| 0x8800c8a 0x874d22b 0x876e51a 0x8437b88 0x87d2bf6 0xd41542 0x38eb6e
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x8800c8a]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12b) [0x874d22b]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x2aa) [0x876e51a]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xa8) [0x8437b88]
m30000| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x2a6) [0x87d2bf6]
m30000| /lib/i686/nosegneg/libpthread.so.0 [0xd41542]
m30000| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x38eb6e]
m30001| Thu Jun 14 01:24:50 [conn5] end connection 127.0.0.1:52347 (4 connections now open)
m30000| Thu Jun 14 01:24:50 [conn3] end connection 127.0.0.1:53863 (5 connections now open)
m30002| Thu Jun 14 01:24:51 [FileAllocator] done allocating datafile /data/db/test2/foo.1, size: 32MB, took 0.842 secs
Thu Jun 14 01:24:51 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:24:51 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:24:51 [interruptThread] now exiting
m30000| Thu Jun 14 01:24:51 dbexit:
m30000| Thu Jun 14 01:24:51 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:24:51 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:24:51 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:24:51 [interruptThread] closing listening socket: 15
m30000| Thu Jun 14 01:24:51 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:24:51 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:24:51 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:24:51 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:24:51 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:24:51 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:24:51 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:24:51 dbexit: really exiting now
m30002| Thu Jun 14 01:24:51 [conn6] warning: DBException thrown :: caused by :: 9001 socket exception
m30002| 0x8800c8a 0x874d22b 0x876e51a 0x8437b88 0x87d2bf6 0x632542 0x1e5b6e
m29000| Thu Jun 14 01:24:51 [conn8] end connection 127.0.0.1:46235 (6 connections now open)
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x8800c8a]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12b) [0x874d22b]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x2aa) [0x876e51a]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xa8) [0x8437b88]
m30002| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x2a6) [0x87d2bf6]
m30002| /lib/i686/nosegneg/libpthread.so.0 [0x632542]
m30002| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x1e5b6e]
m30002| Thu Jun 14 01:24:51 [conn6] end connection 127.0.0.1:59710 (3 connections now open)
m29000| Thu Jun 14 01:24:51 [conn9] end connection 127.0.0.1:46236 (5 connections now open)
m30001| Thu Jun 14 01:24:51 [conn6] end connection 127.0.0.1:52350 (3 connections now open)
Thu Jun 14 01:24:52 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:24:52 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:24:52 [interruptThread] now exiting
m30001| Thu Jun 14 01:24:52 dbexit:
m30001| Thu Jun 14 01:24:52 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:24:52 [interruptThread] closing listening socket: 16
m30001| Thu Jun 14 01:24:52 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:24:52 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:24:52 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:24:52 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:24:52 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:24:52 [conn11] end connection 127.0.0.1:46240 (4 connections now open)
m29000| Thu Jun 14 01:24:52 [conn10] end connection 127.0.0.1:46239 (3 connections now open)
m30001| Thu Jun 14 01:24:52 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:24:52 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:24:52 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:24:52 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:24:52 dbexit: really exiting now
Thu Jun 14 01:24:53 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:24:53 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:24:53 [interruptThread] now exiting
m30002| Thu Jun 14 01:24:53 dbexit:
m30002| Thu Jun 14 01:24:53 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:24:53 [interruptThread] closing listening socket: 19
m30002| Thu Jun 14 01:24:53 [interruptThread] closing listening socket: 20
m30002| Thu Jun 14 01:24:53 [interruptThread] closing listening socket: 21
m30002| Thu Jun 14 01:24:53 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:24:53 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:24:53 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:24:53 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:24:53 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:24:53 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:24:53 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:24:53 dbexit: really exiting now
m29000| Thu Jun 14 01:24:53 [conn12] end connection 127.0.0.1:46244 (2 connections now open)
Thu Jun 14 01:24:54 shell: stopped mongo program on port 30002
m29000| Thu Jun 14 01:24:54 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:24:54 [interruptThread] now exiting
m29000| Thu Jun 14 01:24:54 dbexit:
m29000| Thu Jun 14 01:24:54 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:24:54 [interruptThread] closing listening socket: 22
m29000| Thu Jun 14 01:24:54 [interruptThread] closing listening socket: 23
m29000| Thu Jun 14 01:24:54 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:24:54 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:24:54 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:24:54 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:24:54 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:24:54 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:24:54 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:24:54 dbexit: really exiting now
Thu Jun 14 01:24:55 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 11.975 seconds ***
12031.105042ms
Thu Jun 14 01:24:55 [initandlisten] connection accepted from 127.0.0.1:42297 #6 (5 connections now open)
*******************************************
Test : array_shard_key.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/array_shard_key.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/array_shard_key.js";TestData.testFile = "array_shard_key.js";TestData.testName = "array_shard_key";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:24:55 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/array_shard_key0'
Thu Jun 14 01:24:56 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/array_shard_key0
m30000| Thu Jun 14 01:24:56
m30000| Thu Jun 14 01:24:56 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:24:56
m30000| Thu Jun 14 01:24:56 [initandlisten] MongoDB starting : pid=21382 port=30000 dbpath=/data/db/array_shard_key0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:24:56 [initandlisten]
m30000| Thu Jun 14 01:24:56 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:24:56 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:24:56 [initandlisten]
m30000| Thu Jun 14 01:24:56 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:24:56 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:24:56 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:24:56 [initandlisten]
m30000| Thu Jun 14 01:24:56 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:24:56 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:24:56 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:24:56 [initandlisten] options: { dbpath: "/data/db/array_shard_key0", port: 30000 }
m30000| Thu Jun 14 01:24:56 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:24:56 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/array_shard_key1'
m30000| Thu Jun 14 01:24:56 [initandlisten] connection accepted from 127.0.0.1:53884 #1 (1 connection now open)
Thu Jun 14 01:24:56 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/array_shard_key1
m30001| Thu Jun 14 01:24:56
m30001| Thu Jun 14 01:24:56 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:24:56
m30001| Thu Jun 14 01:24:56 [initandlisten] MongoDB starting : pid=21395 port=30001 dbpath=/data/db/array_shard_key1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:24:56 [initandlisten]
m30001| Thu Jun 14 01:24:56 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:24:56 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:24:56 [initandlisten]
m30001| Thu Jun 14 01:24:56 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:24:56 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:24:56 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:24:56 [initandlisten]
m30001| Thu Jun 14 01:24:56 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:24:56 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:24:56 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:24:56 [initandlisten] options: { dbpath: "/data/db/array_shard_key1", port: 30001 }
m30001| Thu Jun 14 01:24:56 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:24:56 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/array_shard_key2'
m30001| Thu Jun 14 01:24:56 [initandlisten] connection accepted from 127.0.0.1:52362 #1 (1 connection now open)
Thu Jun 14 01:24:56 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/array_shard_key2
m30002| Thu Jun 14 01:24:56
m30002| Thu Jun 14 01:24:56 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:24:56
m30002| Thu Jun 14 01:24:56 [initandlisten] MongoDB starting : pid=21408 port=30002 dbpath=/data/db/array_shard_key2 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:24:56 [initandlisten]
m30002| Thu Jun 14 01:24:56 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:24:56 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:24:56 [initandlisten]
m30002| Thu Jun 14 01:24:56 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:24:56 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:24:56 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:24:56 [initandlisten]
m30002| Thu Jun 14 01:24:56 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:24:56 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:24:56 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:24:56 [initandlisten] options: { dbpath: "/data/db/array_shard_key2", port: 30002 }
m30002| Thu Jun 14 01:24:56 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:24:56 [websvr] admin web console waiting for connections on port 31002
"localhost:30000"
ShardingTest array_shard_key :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001,
connection to localhost:30002
]
}
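The banner above is the ShardingTest fixture for array_shard_key.js: three shard mongods (ports 30000-30002), the config server colocated on localhost:30000, and one mongos on 30999. A sketch of how such a fixture is typically set up in a jstest (the exact constructor arguments used by array_shard_key.js are not shown in this log, so take the option form here as illustrative):

    // Sketch: a jstest fixture roughly matching the topology in this log.
    var st = new ShardingTest({ name: "array_shard_key", shards: 3, mongos: 1 });
    var mongos = st.s;                                    // the mongos on port 30999 above
    var coll = mongos.getCollection("array_shard_key.foo");
    // ... test body ...
    st.stop();                                            // tears the whole cluster down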
Thu Jun 14 01:24:56 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:24:56 [initandlisten] connection accepted from 127.0.0.1:53889 #2 (2 connections now open)
m30002| Thu Jun 14 01:24:56 [initandlisten] connection accepted from 127.0.0.1:59719 #1 (1 connection now open)
m30999| Thu Jun 14 01:24:56 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:24:56 [mongosMain] MongoS version 2.1.2-pre- starting: pid=21422 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:24:56 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:24:56 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:24:56 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:24:56 [initandlisten] connection accepted from 127.0.0.1:53890 #3 (3 connections now open)
m30000| Thu Jun 14 01:24:56 [FileAllocator] allocating new datafile /data/db/array_shard_key0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:24:56 [FileAllocator] creating directory /data/db/array_shard_key0/_tmp
m30000| Thu Jun 14 01:24:56 [FileAllocator] done allocating datafile /data/db/array_shard_key0/config.ns, size: 16MB, took 0.261 secs
m30000| Thu Jun 14 01:24:56 [FileAllocator] allocating new datafile /data/db/array_shard_key0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:24:57 [FileAllocator] done allocating datafile /data/db/array_shard_key0/config.0, size: 16MB, took 0.289 secs
m30000| Thu Jun 14 01:24:57 [FileAllocator] allocating new datafile /data/db/array_shard_key0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:24:57 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn2] insert config.settings keyUpdates:0 locks(micros) w:562211 562ms
m30000| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:53894 #4 (4 connections now open)
m30000| Thu Jun 14 01:24:57 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:24:57 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:24:57 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:24:57 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:24:57 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:24:57 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:24:57 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:24:57 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:24:57
m30999| Thu Jun 14 01:24:57 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:53895 #5 (5 connections now open)
m30000| Thu Jun 14 01:24:57 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:24:57 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651497:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:24:57 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:57 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:24:57 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651497:1804289383' acquired, ts : 4fd975a9ba14fb605be38f09
m30999| Thu Jun 14 01:24:57 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651497:1804289383' unlocked.
m30999| Thu Jun 14 01:24:57 [mongosMain] connection accepted from 127.0.0.1:52746 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:24:57 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:24:57 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:24:57 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:24:57 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:52373 #2 (2 connections now open)
m30999| Thu Jun 14 01:24:57 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:59729 #2 (2 connections now open)
m30999| Thu Jun 14 01:24:57 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
m30999| Thu Jun 14 01:24:57 [conn] couldn't find database [array_shard_key] in config db
m30999| Thu Jun 14 01:24:57 [conn] put [array_shard_key] on: shard0001:localhost:30001
m30000| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:53899 #6 (6 connections now open)
m30999| Thu Jun 14 01:24:57 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd975a9ba14fb605be38f08
m30999| Thu Jun 14 01:24:57 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd975a9ba14fb605be38f08
m30001| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:52376 #3 (3 connections now open)
m30999| Thu Jun 14 01:24:57 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd975a9ba14fb605be38f08
m30002| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:59732 #3 (3 connections now open)
m30999| Thu Jun 14 01:24:57 [conn] enabling sharding on: array_shard_key
m30001| Thu Jun 14 01:24:57 [initandlisten] connection accepted from 127.0.0.1:52378 #4 (4 connections now open)
m30999| Thu Jun 14 01:24:57 [conn] CMD: shardcollection: { shardcollection: "array_shard_key.foo", key: { _id: 1.0, i: 1.0 } }
m30999| Thu Jun 14 01:24:57 [conn] enable sharding on: array_shard_key.foo with shard key: { _id: 1.0, i: 1.0 }
m30999| Thu Jun 14 01:24:57 [conn] going to create 1 chunk(s) for: array_shard_key.foo using new epoch 4fd975a9ba14fb605be38f0a
m30999| Thu Jun 14 01:24:57 [conn] ChunkManager: time to load chunks for array_shard_key.foo: 0ms sequenceNumber: 2 version: 1|0||4fd975a9ba14fb605be38f0a based on: (empty)
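The CMD lines above (enabling sharding on array_shard_key, then shardcollection on { _id: 1.0, i: 1.0 }) correspond to the following admin commands, sketched against this run's mongos (database, collection, and key names are taken from the log):

    // Sketch: enable sharding and shard foo on the compound key { _id: 1, i: 1 }.
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({ enableSharding: "array_shard_key" }));
    printjson(admin.runCommand({ shardCollection: "array_shard_key.foo",
                                 key: { _id: 1, i: 1 } }));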
m30000| Thu Jun 14 01:24:57 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:24:57 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:24:57 [FileAllocator] allocating new datafile /data/db/array_shard_key1/array_shard_key.ns, filling with zeroes...
m30001| Thu Jun 14 01:24:57 [FileAllocator] creating directory /data/db/array_shard_key1/_tmp
m30000| Thu Jun 14 01:24:57 [FileAllocator] done allocating datafile /data/db/array_shard_key0/config.1, size: 32MB, took 0.577 secs
m30001| Thu Jun 14 01:24:58 [FileAllocator] done allocating datafile /data/db/array_shard_key1/array_shard_key.ns, size: 16MB, took 0.332 secs
m30001| Thu Jun 14 01:24:58 [FileAllocator] allocating new datafile /data/db/array_shard_key1/array_shard_key.0, filling with zeroes...
m30001| Thu Jun 14 01:24:58 [FileAllocator] done allocating datafile /data/db/array_shard_key1/array_shard_key.0, size: 16MB, took 0.297 secs
m30001| Thu Jun 14 01:24:58 [conn4] build index array_shard_key.foo { _id: 1 }
m30001| Thu Jun 14 01:24:58 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:24:58 [conn4] info: creating collection array_shard_key.foo on add index
m30001| Thu Jun 14 01:24:58 [conn4] build index array_shard_key.foo { _id: 1.0, i: 1.0 }
m30001| Thu Jun 14 01:24:58 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:24:58 [conn4] insert array_shard_key.system.indexes keyUpdates:0 locks(micros) r:234 w:1154959 1154ms
m30001| Thu Jun 14 01:24:58 [conn3] command admin.$cmd command: { setShardVersion: "array_shard_key.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd975a9ba14fb605be38f0a'), serverID: ObjectId('4fd975a9ba14fb605be38f08'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:170 r:24 reslen:195 1153ms
m30001| Thu Jun 14 01:24:58 [FileAllocator] allocating new datafile /data/db/array_shard_key1/array_shard_key.1, filling with zeroes...
m30001| Thu Jun 14 01:24:58 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:24:58 [initandlisten] connection accepted from 127.0.0.1:53903 #7 (7 connections now open)
m30000| Thu Jun 14 01:24:58 [initandlisten] connection accepted from 127.0.0.1:53904 #8 (8 connections now open)
m30001| Thu Jun 14 01:24:58 [conn4] received splitChunk request: { splitChunk: "array_shard_key.foo", keyPattern: { _id: 1.0, i: 1.0 }, min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 } ], shardId: "array_shard_key.foo-_id_MinKeyi_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:24:58 [conn4] created new distributed lock for array_shard_key.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:24:58 [conn4] distributed lock 'array_shard_key.foo/domU-12-31-39-01-70-B4:30001:1339651498:76623404' acquired, ts : 4fd975aa07f86c2d8245d513
m30001| Thu Jun 14 01:24:58 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651498:76623404 (sleeping for 30000ms)
m30001| Thu Jun 14 01:24:58 [conn4] splitChunk accepted at version 1|0||4fd975a9ba14fb605be38f0a
m30001| Thu Jun 14 01:24:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:58-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651498463), what: "split", ns: "array_shard_key.foo", details: { before: { min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey, i: MinKey }, max: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd975a9ba14fb605be38f0a') }, right: { min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd975a9ba14fb605be38f0a') } } }
m30000| Thu Jun 14 01:24:58 [initandlisten] connection accepted from 127.0.0.1:53905 #9 (9 connections now open)
m30001| Thu Jun 14 01:24:58 [conn4] distributed lock 'array_shard_key.foo/domU-12-31-39-01-70-B4:30001:1339651498:76623404' unlocked.
m30999| Thu Jun 14 01:24:58 [conn] splitting: array_shard_key.foo shard: ns:array_shard_key.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey, i: MinKey } max: { _id: MaxKey, i: MaxKey }
m30999| Thu Jun 14 01:24:58 [conn] ChunkManager: time to load chunks for array_shard_key.foo: 0ms sequenceNumber: 3 version: 1|2||4fd975a9ba14fb605be38f0a based on: 1|0||4fd975a9ba14fb605be38f0a
m30999| Thu Jun 14 01:24:58 [conn] CMD: movechunk: { movechunk: "array_shard_key.foo", find: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:24:58 [conn] moving chunk ns: array_shard_key.foo moving ( ns:array_shard_key.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 } max: { _id: MaxKey, i: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:24:58 [conn4] received moveChunk request: { moveChunk: "array_shard_key.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo-_id_ObjectId('4fd975a930b40389751f391f')i_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:24:58 [conn4] created new distributed lock for array_shard_key.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:24:58 [conn4] distributed lock 'array_shard_key.foo/domU-12-31-39-01-70-B4:30001:1339651498:76623404' acquired, ts : 4fd975aa07f86c2d8245d514
m30001| Thu Jun 14 01:24:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:24:58-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651498468), what: "moveChunk.start", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:24:58 [conn4] moveChunk request accepted at version 1|2||4fd975a9ba14fb605be38f0a
m30001| Thu Jun 14 01:24:58 [conn4] moveChunk number of documents: 0
m30001| Thu Jun 14 01:24:58 [initandlisten] connection accepted from 127.0.0.1:52382 #5 (5 connections now open)
m30000| Thu Jun 14 01:24:58 [FileAllocator] allocating new datafile /data/db/array_shard_key0/array_shard_key.ns, filling with zeroes...
m30001| Thu Jun 14 01:24:59 [FileAllocator] done allocating datafile /data/db/array_shard_key1/array_shard_key.1, size: 32MB, took 0.855 secs
m30000| Thu Jun 14 01:24:59 [FileAllocator] done allocating datafile /data/db/array_shard_key0/array_shard_key.ns, size: 16MB, took 0.885 secs
m30000| Thu Jun 14 01:24:59 [FileAllocator] allocating new datafile /data/db/array_shard_key0/array_shard_key.0, filling with zeroes...
m30001| Thu Jun 14 01:24:59 [conn4] moveChunk data transfer progress: { active: true, ns: "array_shard_key.foo", from: "localhost:30001", min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:24:59 [FileAllocator] done allocating datafile /data/db/array_shard_key0/array_shard_key.0, size: 16MB, took 0.261 secs
m30000| Thu Jun 14 01:24:59 [FileAllocator] allocating new datafile /data/db/array_shard_key0/array_shard_key.1, filling with zeroes...
m30000| Thu Jun 14 01:24:59 [migrateThread] build index array_shard_key.foo { _id: 1 }
m30000| Thu Jun 14 01:24:59 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:59 [migrateThread] info: creating collection array_shard_key.foo on add index
m30000| Thu Jun 14 01:24:59 [migrateThread] build index array_shard_key.foo { _id: 1.0, i: 1.0 }
m30000| Thu Jun 14 01:24:59 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:24:59 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo' { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30000| Thu Jun 14 01:25:00 [FileAllocator] done allocating datafile /data/db/array_shard_key0/array_shard_key.1, size: 32MB, took 0.555 secs
m30001| Thu Jun 14 01:25:00 [conn4] moveChunk data transfer progress: { active: true, ns: "array_shard_key.foo", from: "localhost:30001", min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:25:00 [conn4] moveChunk setting version to: 2|0||4fd975a9ba14fb605be38f0a
m30000| Thu Jun 14 01:25:00 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo' { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30000| Thu Jun 14 01:25:00 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:00-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651500487), what: "moveChunk.to", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 5: 1183, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 834 } }
m30000| Thu Jun 14 01:25:00 [initandlisten] connection accepted from 127.0.0.1:53907 #10 (10 connections now open)
m30001| Thu Jun 14 01:25:00 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "array_shard_key.foo", from: "localhost:30001", min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:25:00 [conn4] moveChunk updating self version to: 2|1||4fd975a9ba14fb605be38f0a through { _id: MinKey, i: MinKey } -> { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 } for collection 'array_shard_key.foo'
m30001| Thu Jun 14 01:25:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:00-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651500491), what: "moveChunk.commit", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:25:00 [conn4] doing delete inline
m30001| Thu Jun 14 01:25:00 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:25:00 [conn4] distributed lock 'array_shard_key.foo/domU-12-31-39-01-70-B4:30001:1339651498:76623404' unlocked.
m30001| Thu Jun 14 01:25:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:00-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651500492), what: "moveChunk.from", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:25:00 [conn4] command admin.$cmd command: { moveChunk: "array_shard_key.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd975a930b40389751f391f'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo-_id_ObjectId('4fd975a930b40389751f391f')i_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:307 w:1155010 reslen:37 2024ms
m30999| Thu Jun 14 01:25:00 [conn] ChunkManager: time to load chunks for array_shard_key.foo: 0ms sequenceNumber: 4 version: 2|1||4fd975a9ba14fb605be38f0a based on: 1|2||4fd975a9ba14fb605be38f0a
{ "millis" : 2025, "ok" : 1 }
[
{
"_id" : "array_shard_key.foo-_id_MinKeyi_MinKey",
"lastmod" : Timestamp(2000, 1),
"lastmodEpoch" : ObjectId("4fd975a9ba14fb605be38f0a"),
"ns" : "array_shard_key.foo",
"min" : {
"_id" : { $minKey : 1 },
"i" : { $minKey : 1 }
},
"max" : {
"_id" : ObjectId("4fd975a930b40389751f391f"),
"i" : 1
},
"shard" : "shard0001"
},
{
"_id" : "array_shard_key.foo-_id_ObjectId('4fd975a930b40389751f391f')i_1.0",
"lastmod" : Timestamp(2000, 0),
"lastmodEpoch" : ObjectId("4fd975a9ba14fb605be38f0a"),
"ns" : "array_shard_key.foo",
"min" : {
"_id" : ObjectId("4fd975a930b40389751f391f"),
"i" : 1
},
"max" : {
"_id" : { $maxKey : 1 },
"i" : { $maxKey : 1 }
},
"shard" : "shard0000"
}
]
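The array above is the test dumping both foo chunks from the config database after the split and migration. A sketch of a query that yields such a listing, run through the same mongos (the sort on min is an assumption about presentation; config.chunks and its ns/min/shard fields are the standard config metadata):

    // Sketch: list the chunks of array_shard_key.foo from the config database.
    var config = new Mongo("localhost:30999").getDB("config");
    config.chunks.find({ ns: "array_shard_key.foo" })
                 .sort({ min: 1 })
                 .forEach(printjson);      // prints documents like the two above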
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "array_shard_key", "partitioned" : true, "primary" : "shard0001" }
		array_shard_key.foo chunks:
				shard0001	1
				shard0000	1
			{ "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd975a930b40389751f391f"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
			{ "_id" : ObjectId("4fd975a930b40389751f391f"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
1: insert some invalid data
m30999| Thu Jun 14 01:25:00 [conn] warning: shard key mismatch for insert { _id: ObjectId('4fd975acba14fb605be38f0b'), _id: ObjectId('4fd975ac30b40389751f3920'), i: [ 1.0, 2.0 ] }, expected values for { _id: 1.0, i: 1.0 }, reloading config data to ensure not stale
m30999| Thu Jun 14 01:25:01 [conn] tried to insert object with no valid shard key for { _id: 1.0, i: 1.0 } : { _id: ObjectId('4fd975acba14fb605be38f0c'), _id: ObjectId('4fd975ac30b40389751f3920'), i: [ 1.0, 2.0 ] }
"tried to insert object with no valid shard key for { _id: 1.0, i: 1.0 } : { _id: ObjectId('4fd975acba14fb605be38f0c'), _id: ObjectId('4fd975ac30b40389751f3920'), i: [ 1.0, 2.0 ] }"
m30000| Thu Jun 14 01:25:01 [conn6] no current chunk manager found for this shard, will initialize
"full shard key must be in update object for collection: array_shard_key.foo"
"multi-updates require $ops rather than replacement object"
"cannot modify shard key for collection array_shard_key.foo, found new value for i"
"Sharding-then-inserting-multikey tested, now trying inserting-then-sharding-multikey"
m30001| Thu Jun 14 01:25:03 [conn3] build index array_shard_key.foo2 { _id: 1 }
m30001| Thu Jun 14 01:25:03 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:25:03 [conn3] build index array_shard_key.foo2 { _id: 1.0, i: 1.0 }
m30001| Thu Jun 14 01:25:03 [conn3] build index done. scanned 10 total records. 0 secs
{ "ok" : 0, "errmsg" : "couldn't find valid index for shard key" }
assert failed
Error("Printing Stack Trace")@:0
()@src/mongo/shell/utils.js:37
("assert failed")@src/mongo/shell/utils.js:58
(false)@src/mongo/shell/utils.js:66
([object DBCollection],[object Object],[object Object])@src/mongo/shell/shardingtest.js:866
@/mnt/slaves/Linux_32bit/mongo/jstests/sharding/array_shard_key.js:102
Correctly threw error on sharding with multikey index.
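"Correctly threw error on sharding with multikey index" covers the second scenario: documents with an array in i are inserted first, the { _id: 1, i: 1 } index is then built (and becomes multikey, as the foo2 index-build lines above show), and shardCollection is expected to fail with "couldn't find valid index for shard key". A sketch of that sequence (document shape is illustrative):

    // Sketch: insert multikey data first, then attempt to shard on { _id: 1, i: 1 }.
    var mongos = new Mongo("localhost:30999");
    var testDB = mongos.getDB("array_shard_key");
    testDB.foo2.insert({ i: [ 1, 2, 3 ] });              // any index covering i becomes multikey
    testDB.foo2.ensureIndex({ _id: 1, i: 1 });
    var res = mongos.getDB("admin").runCommand({ shardCollection: "array_shard_key.foo2",
                                                 key: { _id: 1, i: 1 } });
    printjson(res);   // { "ok" : 0, "errmsg" : "couldn't find valid index for shard key" }
    assert.eq(0, res.ok);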
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "array_shard_key", "partitioned" : true, "primary" : "shard0001" }
		array_shard_key.foo chunks:
				shard0001	1
				shard0000	1
			{ "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd975a930b40389751f391f"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
			{ "_id" : ObjectId("4fd975a930b40389751f391f"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
m30001| Thu Jun 14 01:25:03 [conn3] build index array_shard_key.foo23 { _id: 1 }
m30001| Thu Jun 14 01:25:03 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:25:03 [conn3] build index array_shard_key.foo23 { _id: 1.0, i: 1.0 }
m30001| Thu Jun 14 01:25:03 [conn3] build index done. scanned 10 total records. 0 secs
m30999| Thu Jun 14 01:25:03 [conn] CMD: shardcollection: { shardcollection: "array_shard_key.foo23", key: { _id: 1.0, i: 1.0 } }
m30999| Thu Jun 14 01:25:03 [conn] enable sharding on: array_shard_key.foo23 with shard key: { _id: 1.0, i: 1.0 }
m30999| Thu Jun 14 01:25:03 [conn] going to create 1 chunk(s) for: array_shard_key.foo23 using new epoch 4fd975afba14fb605be38f0d
m30999| Thu Jun 14 01:25:03 [conn] ChunkManager: time to load chunks for array_shard_key.foo23: 0ms sequenceNumber: 5 version: 1|0||4fd975afba14fb605be38f0d based on: (empty)
m30001| Thu Jun 14 01:25:03 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:25:03 [conn] splitting: array_shard_key.foo23 shard: ns:array_shard_key.foo23 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey, i: MinKey } max: { _id: MaxKey, i: MaxKey }
m30001| Thu Jun 14 01:25:03 [conn4] received splitChunk request: { splitChunk: "array_shard_key.foo23", keyPattern: { _id: 1.0, i: 1.0 }, min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 } ], shardId: "array_shard_key.foo23-_id_MinKeyi_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:25:03 [conn4] created new distributed lock for array_shard_key.foo23 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:25:03 [conn4] distributed lock 'array_shard_key.foo23/domU-12-31-39-01-70-B4:30001:1339651498:76623404' acquired, ts : 4fd975af07f86c2d8245d515
m30001| Thu Jun 14 01:25:03 [conn4] splitChunk accepted at version 1|0||4fd975afba14fb605be38f0d
m30001| Thu Jun 14 01:25:03 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:03-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651503565), what: "split", ns: "array_shard_key.foo23", details: { before: { min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey, i: MinKey }, max: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd975afba14fb605be38f0d') }, right: { min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd975afba14fb605be38f0d') } } }
m30001| Thu Jun 14 01:25:03 [conn4] distributed lock 'array_shard_key.foo23/domU-12-31-39-01-70-B4:30001:1339651498:76623404' unlocked.
m30999| Thu Jun 14 01:25:03 [conn] ChunkManager: time to load chunks for array_shard_key.foo23: 0ms sequenceNumber: 6 version: 1|2||4fd975afba14fb605be38f0d based on: 1|0||4fd975afba14fb605be38f0d
m30999| Thu Jun 14 01:25:03 [conn] CMD: movechunk: { movechunk: "array_shard_key.foo23", find: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:25:03 [conn] moving chunk ns: array_shard_key.foo23 moving ( ns:array_shard_key.foo23 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 } max: { _id: MaxKey, i: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:25:03 [conn4] received moveChunk request: { moveChunk: "array_shard_key.foo23", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo23-_id_ObjectId('4fd975af30b40389751f3939')i_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:25:03 [conn4] created new distributed lock for array_shard_key.foo23 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:25:03 [conn4] distributed lock 'array_shard_key.foo23/domU-12-31-39-01-70-B4:30001:1339651498:76623404' acquired, ts : 4fd975af07f86c2d8245d516
m30001| Thu Jun 14 01:25:03 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:03-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651503569), what: "moveChunk.start", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:25:03 [conn4] moveChunk request accepted at version 1|2||4fd975afba14fb605be38f0d
m30001| Thu Jun 14 01:25:03 [conn4] moveChunk number of documents: 0
m30000| Thu Jun 14 01:25:03 [migrateThread] build index array_shard_key.foo23 { _id: 1 }
m30000| Thu Jun 14 01:25:03 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:25:03 [migrateThread] info: creating collection array_shard_key.foo23 on add index
m30000| Thu Jun 14 01:25:03 [migrateThread] build index array_shard_key.foo23 { _id: 1.0, i: 1.0 }
m30000| Thu Jun 14 01:25:03 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:25:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo23' { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30001| Thu Jun 14 01:25:04 [conn4] moveChunk data transfer progress: { active: true, ns: "array_shard_key.foo23", from: "localhost:30001", min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:25:04 [conn4] moveChunk setting version to: 2|0||4fd975afba14fb605be38f0d
m30000| Thu Jun 14 01:25:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo23' { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30000| Thu Jun 14 01:25:04 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:04-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651504583), what: "moveChunk.to", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1012 } }
m30001| Thu Jun 14 01:25:04 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "array_shard_key.foo23", from: "localhost:30001", min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:25:04 [conn4] moveChunk updating self version to: 2|1||4fd975afba14fb605be38f0d through { _id: MinKey, i: MinKey } -> { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 } for collection 'array_shard_key.foo23'
m30001| Thu Jun 14 01:25:04 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:04-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651504587), what: "moveChunk.commit", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:25:04 [conn4] doing delete inline
m30001| Thu Jun 14 01:25:04 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:25:04 [conn4] distributed lock 'array_shard_key.foo23/domU-12-31-39-01-70-B4:30001:1339651498:76623404' unlocked.
m30001| Thu Jun 14 01:25:04 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:25:04-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52378", time: new Date(1339651504588), what: "moveChunk.from", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:25:04 [conn4] command admin.$cmd command: { moveChunk: "array_shard_key.foo23", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd975af30b40389751f3939'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo23-_id_ObjectId('4fd975af30b40389751f3939')i_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:766 w:1155079 reslen:37 1020ms
m30999| Thu Jun 14 01:25:04 [conn] ChunkManager: time to load chunks for array_shard_key.foo23: 0ms sequenceNumber: 7 version: 2|1||4fd975afba14fb605be38f0d based on: 1|2||4fd975afba14fb605be38f0d
{ "millis" : 1021, "ok" : 1 }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "array_shard_key", "partitioned" : true, "primary" : "shard0001" }
      array_shard_key.foo chunks:
        shard0001 1
        shard0000 1
        { "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd975a930b40389751f391f"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
        { "_id" : ObjectId("4fd975a930b40389751f391f"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
      array_shard_key.foo23 chunks:
        shard0001 1
        shard0000 1
        { "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd975af30b40389751f3939"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
        { "_id" : ObjectId("4fd975af30b40389751f3939"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
m30999| Thu Jun 14 01:25:04 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:25:04 [conn3] end connection 127.0.0.1:53890 (9 connections now open)
m30000| Thu Jun 14 01:25:04 [conn5] end connection 127.0.0.1:53895 (8 connections now open)
m30000| Thu Jun 14 01:25:04 [conn6] end connection 127.0.0.1:53899 (7 connections now open)
m30002| Thu Jun 14 01:25:04 [conn3] end connection 127.0.0.1:59732 (2 connections now open)
m30001| Thu Jun 14 01:25:04 [conn3] end connection 127.0.0.1:52376 (4 connections now open)
m30001| Thu Jun 14 01:25:04 [conn4] end connection 127.0.0.1:52378 (3 connections now open)
Thu Jun 14 01:25:05 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:25:05 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:25:05 [interruptThread] now exiting
m30000| Thu Jun 14 01:25:05 dbexit:
m30000| Thu Jun 14 01:25:05 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:25:05 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:25:05 [interruptThread] closing listening socket: 15
m30000| Thu Jun 14 01:25:05 [interruptThread] closing listening socket: 16
m30000| Thu Jun 14 01:25:05 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:25:05 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:25:05 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:25:05 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:25:05 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:25:05 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:25:05 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:25:05 dbexit: really exiting now
m30001| Thu Jun 14 01:25:05 [conn5] end connection 127.0.0.1:52382 (2 connections now open)
Thu Jun 14 01:25:06 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:25:06 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:25:06 [interruptThread] now exiting
m30001| Thu Jun 14 01:25:06 dbexit:
m30001| Thu Jun 14 01:25:06 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:25:06 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:25:06 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:25:06 [interruptThread] closing listening socket: 19
m30001| Thu Jun 14 01:25:06 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:25:06 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:25:06 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:25:06 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:25:06 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:25:06 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:25:06 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:25:06 dbexit: really exiting now
Thu Jun 14 01:25:07 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:25:07 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:25:07 [interruptThread] now exiting
m30002| Thu Jun 14 01:25:07 dbexit:
m30002| Thu Jun 14 01:25:07 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:25:07 [interruptThread] closing listening socket: 20
m30002| Thu Jun 14 01:25:07 [interruptThread] closing listening socket: 21
m30002| Thu Jun 14 01:25:07 [interruptThread] closing listening socket: 22
m30002| Thu Jun 14 01:25:07 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:25:07 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:25:07 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:25:07 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:25:07 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:25:07 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:25:07 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:25:07 dbexit: really exiting now
Thu Jun 14 01:25:08 shell: stopped mongo program on port 30002
*** ShardingTest array_shard_key completed successfully in 12.629 seconds ***
12680.814028ms
Thu Jun 14 01:25:08 [initandlisten] connection accepted from 127.0.0.1:42323 #7 (6 connections now open)
*******************************************
Test : auth.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/auth.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/auth.js";TestData.testFile = "auth.js";TestData.testName = "auth";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:25:08 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/auth1-config0'
Thu Jun 14 01:25:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/auth1-config0 --keyFile jstests/libs/key1
m29000| Thu Jun 14 01:25:08
m29000| Thu Jun 14 01:25:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:25:08
m29000| Thu Jun 14 01:25:08 [initandlisten] MongoDB starting : pid=21476 port=29000 dbpath=/data/db/auth1-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:25:08 [initandlisten]
m29000| Thu Jun 14 01:25:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:25:08 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:25:08 [initandlisten]
m29000| Thu Jun 14 01:25:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:25:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:25:08 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:25:08 [initandlisten]
m29000| Thu Jun 14 01:25:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:25:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:25:08 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:25:08 [initandlisten] options: { dbpath: "/data/db/auth1-config0", keyFile: "jstests/libs/key1", port: 29000 }
m29000| Thu Jun 14 01:25:08 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:25:08 [websvr] admin web console waiting for connections on port 30000
"domU-12-31-39-01-70-B4:29000"
m29000| Thu Jun 14 01:25:08 [initandlisten] connection accepted from 127.0.0.1:46273 #1 (1 connection now open)
m29000| Thu Jun 14 01:25:08 [conn1] note: no users configured in admin.system.users, allowing localhost access
ShardingTest auth1 :
{ "config" : "domU-12-31-39-01-70-B4:29000", "shards" : [ ] }
Thu Jun 14 01:25:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:29000 --keyFile jstests/libs/key1
m30999| Thu Jun 14 01:25:08 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:25:08 [mongosMain] MongoS version 2.1.2-pre- starting: pid=21490 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:25:08 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:25:08 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:25:08 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", keyFile: "jstests/libs/key1", port: 30999 }
m29000| Thu Jun 14 01:25:08 [initandlisten] connection accepted from 10.255.119.66:36512 #2 (2 connections now open)
m29000| Thu Jun 14 01:25:08 [initandlisten] connection accepted from 10.255.119.66:36514 #3 (3 connections now open)
m29000| Thu Jun 14 01:25:08 [conn3] authenticate db: local { authenticate: 1, nonce: "bd8386e647a0b997", user: "__system", key: "6ff8abf0df8445ab003403447054a2dc" }
m29000| Thu Jun 14 01:25:08 [initandlisten] connection accepted from 10.255.119.66:36515 #4 (4 connections now open)
m29000| Thu Jun 14 01:25:08 [initandlisten] connection accepted from 10.255.119.66:36516 #5 (5 connections now open)
m29000| Thu Jun 14 01:25:08 [conn4] authenticate db: local { authenticate: 1, nonce: "2f943a4637c0f17b", user: "__system", key: "5deb50fdb1f130cca127af3327d21091" }
m29000| Thu Jun 14 01:25:08 [conn5] authenticate db: local { authenticate: 1, nonce: "8a9b797981a9c8e9", user: "__system", key: "32e9bef2deb93e6af0e4e568fe59367e" }
m29000| Thu Jun 14 01:25:08 [FileAllocator] allocating new datafile /data/db/auth1-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:25:08 [FileAllocator] creating directory /data/db/auth1-config0/_tmp
m29000| Thu Jun 14 01:25:09 [FileAllocator] done allocating datafile /data/db/auth1-config0/config.ns, size: 16MB, took 0.24 secs
m29000| Thu Jun 14 01:25:09 [FileAllocator] allocating new datafile /data/db/auth1-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:25:09 [FileAllocator] done allocating datafile /data/db/auth1-config0/config.0, size: 16MB, took 0.241 secs
m29000| Thu Jun 14 01:25:09 [FileAllocator] allocating new datafile /data/db/auth1-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:25:09 [conn5] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn5] insert config.version keyUpdates:0 locks(micros) w:493395 493ms
m29000| Thu Jun 14 01:25:09 [conn3] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn3] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:25:09 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn3] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:25:09 [conn3] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:25:09 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:25:09 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:25:09 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:25:09 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:25:09
m30999| Thu Jun 14 01:25:09 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:25:09 [initandlisten] connection accepted from 10.255.119.66:36519 #6 (6 connections now open)
m29000| Thu Jun 14 01:25:09 [conn6] authenticate db: local { authenticate: 1, nonce: "7425a3839a7cd521", user: "__system", key: "88ee188c40b3d1ba6cdf500c746ea4d4" }
m29000| Thu Jun 14 01:25:09 [conn5] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn6] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:25:09 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975b52fbdcaaf7b2c0729
m30999| Thu Jun 14 01:25:09 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30999:1339651509:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:25:09 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:09 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:25:09 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m30999| Thu Jun 14 01:25:09 [websvr] admin web console waiting for connections on port 31999
logging in first; if there was an unclean shutdown the user might already exist
m30999| Thu Jun 14 01:25:09 [mongosMain] connection accepted from 127.0.0.1:52769 #1 (1 connection now open)
m30999| Thu Jun 14 01:25:09 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:25:09 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:25:09 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:25:09 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:25:09 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "146eeb504cae8c4f", key: "72f7633b837f42f421bae995a3324c2f" }
m30999| Thu Jun 14 01:25:09 [conn] auth: couldn't find user foo, admin.system.users
{ "ok" : 0, "errmsg" : "auth fails" }
m29000| Thu Jun 14 01:25:09 [initandlisten] connection accepted from 10.255.119.66:36521 #7 (7 connections now open)
m29000| Thu Jun 14 01:25:09 [conn7] authenticate db: local { authenticate: 1, nonce: "e03654b89ba223d5", user: "__system", key: "6626ba976af85442463ab2f16b72a7e4" }
m30999| Thu Jun 14 01:25:09 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:29000 serverID: 4fd975b52fbdcaaf7b2c0728
m30999| Thu Jun 14 01:25:09 [conn] note: no users configured in admin.system.users, allowing localhost access
adding user
{
"user" : "foo",
"readOnly" : false,
"pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
"_id" : ObjectId("4fd975b56f2560a998175b68")
}
m29000| Thu Jun 14 01:25:10 [FileAllocator] done allocating datafile /data/db/auth1-config0/config.1, size: 32MB, took 0.626 secs
m29000| Thu Jun 14 01:25:10 [FileAllocator] allocating new datafile /data/db/auth1-config0/admin.ns, filling with zeroes...
m29000| Thu Jun 14 01:25:10 [FileAllocator] done allocating datafile /data/db/auth1-config0/admin.ns, size: 16MB, took 0.309 secs
m29000| Thu Jun 14 01:25:10 [FileAllocator] allocating new datafile /data/db/auth1-config0/admin.0, filling with zeroes...
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
{
"singleShard" : "domU-12-31-39-01-70-B4:29000",
"updatedExisting" : true,
"n" : 1,
"connectionId" : 7,
"err" : null,
"ok" : 1
}
[ { "_id" : "chunksize", "value" : 1 } ]
restart mongos
Thu Jun 14 01:25:10 No db started on port: 31000
Thu Jun 14 01:25:10 shell: stopped mongo program on port 31000
Thu Jun 14 01:25:10 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 31000 --configdb domU-12-31-39-01-70-B4:29000 --keyFile jstests/libs/key1 --chunkSize 1
m29000| Thu Jun 14 01:25:10 [FileAllocator] done allocating datafile /data/db/auth1-config0/admin.0, size: 16MB, took 0.262 secs
m29000| Thu Jun 14 01:25:10 [FileAllocator] allocating new datafile /data/db/auth1-config0/admin.1, filling with zeroes...
m29000| Thu Jun 14 01:25:10 [conn7] build index admin.system.users { _id: 1 }
m29000| Thu Jun 14 01:25:10 [conn7] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:25:10 [conn7] insert admin.system.users keyUpdates:0 locks(micros) W:995 r:220 w:1031619 1031ms
m30999| Thu Jun 14 01:25:10 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "77baf9fcb4e35a91", key: "8979002c886412ee721c078392b03359" }
m31000| Thu Jun 14 01:25:10 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m31000| Thu Jun 14 01:25:10 [mongosMain] MongoS version 2.1.2-pre- starting: pid=21515 port=31000 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m31000| Thu Jun 14 01:25:10 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31000| Thu Jun 14 01:25:10 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31000| Thu Jun 14 01:25:10 [mongosMain] options: { chunkSize: 1, configdb: "domU-12-31-39-01-70-B4:29000", keyFile: "jstests/libs/key1", port: 31000 }
m29000| Thu Jun 14 01:25:10 [initandlisten] connection accepted from 10.255.119.66:36523 #8 (8 connections now open)
m29000| Thu Jun 14 01:25:10 [conn8] authenticate db: local { authenticate: 1, nonce: "797be149de9d8a87", user: "__system", key: "8b89f5153f196fa8559b94ebf9ec95bb" }
m31000| Thu Jun 14 01:25:10 [mongosMain] waiting for connections on port 31000
m31000| Thu Jun 14 01:25:10 [websvr] admin web console waiting for connections on port 32000
m31000| Thu Jun 14 01:25:10 [Balancer] about to contact config servers and shards
m29000| Thu Jun 14 01:25:10 [initandlisten] connection accepted from 10.255.119.66:36524 #9 (9 connections now open)
m29000| Thu Jun 14 01:25:10 [conn9] authenticate db: local { authenticate: 1, nonce: "cabf8cd60b915f8e", user: "__system", key: "54edf56e57b3ee8114704bf5bdd2793c" }
m31000| Thu Jun 14 01:25:10 [Balancer] config servers and shards contacted successfully
m31000| Thu Jun 14 01:25:10 [Balancer] balancer id: domU-12-31-39-01-70-B4:31000 started at Jun 14 01:25:10
m31000| Thu Jun 14 01:25:10 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:25:10 [initandlisten] connection accepted from 10.255.119.66:36525 #10 (10 connections now open)
m29000| Thu Jun 14 01:25:10 [conn10] authenticate db: local { authenticate: 1, nonce: "ddf04dcfe3b62273", user: "__system", key: "4fcd3b37df0110b60be5ebc7d0bc26ac" }
m31000| Thu Jun 14 01:25:10 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975b644bfbb7b7d56821c
m31000| Thu Jun 14 01:25:10 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31000| Thu Jun 14 01:25:10 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:31000:1339651510:1804289383 (sleeping for 30000ms)
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key2",
"port" : 31100,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "d1"
}
}
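The option document above is what the jstest replica-set harness passes to each mongod it launches. A rough sketch of driving that harness from a jstest follows; the exact option names accepted by the 2.x ReplSetTest helper are assumptions:

    // sketch; option names for the 2.x ReplSetTest helper are assumptions
    var d1 = new ReplSetTest({ name: "d1", nodes: 3 });
    d1.startSet({ oplogSize: 40, keyFile: "jstests/libs/key2" });  // spawns one mongod --replSet d1 per node
    d1.initiate();                                                  // runs replSetInitiate against the first node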
ReplSetTest Starting....
Resetting db path '/data/db/d1-0'
m31000| Thu Jun 14 01:25:10 [mongosMain] connection accepted from 127.0.0.1:33969 #1 (1 connection now open)
m31000| Thu Jun 14 01:25:10 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "5df6201e25489fb", key: "ec7d87375877ee386a6ab1bd895b8a2f" }
m29000| Thu Jun 14 01:25:11 [FileAllocator] done allocating datafile /data/db/auth1-config0/admin.1, size: 32MB, took 0.717 secs
Thu Jun 14 01:25:11 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 31100 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:25:11
m31100| Thu Jun 14 01:25:11 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:25:11
m31100| Thu Jun 14 01:25:11 [initandlisten] MongoDB starting : pid=21532 port=31100 dbpath=/data/db/d1-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:25:11 [initandlisten]
m31100| Thu Jun 14 01:25:11 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:25:11 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:25:11 [initandlisten]
m31100| Thu Jun 14 01:25:11 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:25:11 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:25:11 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:25:11 [initandlisten]
m31100| Thu Jun 14 01:25:11 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:25:11 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:25:11 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:25:11 [initandlisten] options: { dbpath: "/data/db/d1-0", keyFile: "jstests/libs/key2", noprealloc: true, oplogSize: 40, port: 31100, replSet: "d1", rest: true, smallfiles: true }
m31100| Thu Jun 14 01:25:11 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:25:11 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:25:11 [initandlisten] connection accepted from 10.255.119.66:47606 #1 (1 connection now open)
m31100| Thu Jun 14 01:25:11 [conn1] authenticate db: local { authenticate: 1, nonce: "5ed7de5f82ad1bd3", user: "__system", key: "6acf0ea0d6a3b660bd34066d6b1470cd" }
m31100| Thu Jun 14 01:25:11 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:25:11 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to domU-12-31-39-01-70-B4:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key2",
"port" : 31101,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-1'
m31100| Thu Jun 14 01:25:11 [initandlisten] connection accepted from 127.0.0.1:60417 #2 (2 connections now open)
m31100| Thu Jun 14 01:25:11 [conn2] note: no users configured in admin.system.users, allowing localhost access
Thu Jun 14 01:25:11 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 31101 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:25:11
m31101| Thu Jun 14 01:25:11 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:25:11
m31101| Thu Jun 14 01:25:11 [initandlisten] MongoDB starting : pid=21548 port=31101 dbpath=/data/db/d1-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:25:11 [initandlisten]
m31101| Thu Jun 14 01:25:11 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:25:11 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:25:11 [initandlisten]
m31101| Thu Jun 14 01:25:11 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:25:11 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:25:11 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:25:11 [initandlisten]
m31101| Thu Jun 14 01:25:11 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:25:11 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:25:11 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:25:11 [initandlisten] options: { dbpath: "/data/db/d1-1", keyFile: "jstests/libs/key2", noprealloc: true, oplogSize: 40, port: 31101, replSet: "d1", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:25:11 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:25:11 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:25:11 [initandlisten] connection accepted from 10.255.119.66:40363 #1 (1 connection now open)
m31101| Thu Jun 14 01:25:11 [conn1] authenticate db: local { authenticate: 1, nonce: "d028a626c252c9bd", user: "__system", key: "8b4858d26ee338cec431119d5dfc5c38" }
m31101| Thu Jun 14 01:25:11 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:25:11 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Thu Jun 14 01:25:11 [initandlisten] connection accepted from 127.0.0.1:48327 #2 (2 connections now open)
m31101| Thu Jun 14 01:25:11 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101
]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key2",
"port" : 31102,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-2'
Thu Jun 14 01:25:11 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 31102 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Thu Jun 14 01:25:11
m31102| Thu Jun 14 01:25:11 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Thu Jun 14 01:25:11
m31102| Thu Jun 14 01:25:11 [initandlisten] MongoDB starting : pid=21564 port=31102 dbpath=/data/db/d1-2 32-bit host=domU-12-31-39-01-70-B4
m31102| Thu Jun 14 01:25:11 [initandlisten]
m31102| Thu Jun 14 01:25:11 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Thu Jun 14 01:25:11 [initandlisten] ** Not recommended for production.
m31102| Thu Jun 14 01:25:11 [initandlisten]
m31102| Thu Jun 14 01:25:11 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Thu Jun 14 01:25:11 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Thu Jun 14 01:25:11 [initandlisten] ** with --journal, the limit is lower
m31102| Thu Jun 14 01:25:11 [initandlisten]
m31102| Thu Jun 14 01:25:11 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Thu Jun 14 01:25:11 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Thu Jun 14 01:25:11 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31102| Thu Jun 14 01:25:11 [initandlisten] options: { dbpath: "/data/db/d1-2", keyFile: "jstests/libs/key2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "d1", rest: true, smallfiles: true }
m31102| Thu Jun 14 01:25:11 [initandlisten] waiting for connections on port 31102
m31102| Thu Jun 14 01:25:11 [websvr] admin web console waiting for connections on port 32102
m31102| Thu Jun 14 01:25:11 [initandlisten] connection accepted from 10.255.119.66:45841 #1 (1 connection now open)
m31102| Thu Jun 14 01:25:11 [conn1] authenticate db: local { authenticate: 1, nonce: "dcb218a6c4fb8479", user: "__system", key: "81d033daaa5329ef45d65e85fa55b94e" }
m31102| Thu Jun 14 01:25:11 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Thu Jun 14 01:25:11 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Thu Jun 14 01:25:12 [initandlisten] connection accepted from 127.0.0.1:38618 #2 (2 connections now open)
m31102| Thu Jun 14 01:25:12 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101,
connection to domU-12-31-39-01-70-B4:31102
]
{
    "replSetInitiate" : {
        "_id" : "d1",
        "members" : [
            {
                "_id" : 0,
                "host" : "domU-12-31-39-01-70-B4:31100"
            },
            {
                "_id" : 1,
                "host" : "domU-12-31-39-01-70-B4:31101"
            },
            {
                "_id" : 2,
                "host" : "domU-12-31-39-01-70-B4:31102"
            }
        ]
    }
}
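A minimal sketch of submitting the config document printed above, assuming a direct shell connection to the node on port 31100 (rs.initiate(cfg) is the equivalent shell helper); the values are copied from the log:

    // submit the replica set config shown above
    var cfg = {
        _id: "d1",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31101" },
            { _id: 2, host: "domU-12-31-39-01-70-B4:31102" }
        ]
    };
    printjson(db.getSiblingDB("admin").runCommand({ replSetInitiate: cfg }));
    // expected: { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }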
m31100| Thu Jun 14 01:25:12 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:25:12 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Thu Jun 14 01:25:12 [initandlisten] connection accepted from 10.255.119.66:40368 #3 (3 connections now open)
m31101| Thu Jun 14 01:25:12 [conn3] authenticate db: local { authenticate: 1, nonce: "a736ee76adfd4c91", user: "__system", key: "db8a467c99aede87e6128f04b9a304c0" }
m31102| Thu Jun 14 01:25:12 [initandlisten] connection accepted from 10.255.119.66:45844 #3 (3 connections now open)
m31102| Thu Jun 14 01:25:12 [conn3] authenticate db: local { authenticate: 1, nonce: "a8189eb2228db192", user: "__system", key: "54d7cc5e7f4edbf2539a527c14ee03ff" }
m31100| Thu Jun 14 01:25:12 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:25:12 [conn2] ******
m31100| Thu Jun 14 01:25:12 [conn2] creating replication oplog of size: 40MB...
m31100| Thu Jun 14 01:25:12 [FileAllocator] allocating new datafile /data/db/d1-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:25:12 [FileAllocator] creating directory /data/db/d1-0/_tmp
m31100| Thu Jun 14 01:25:12 [FileAllocator] done allocating datafile /data/db/d1-0/local.ns, size: 16MB, took 0.222 secs
m31100| Thu Jun 14 01:25:12 [FileAllocator] allocating new datafile /data/db/d1-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:25:13 [FileAllocator] done allocating datafile /data/db/d1-0/local.0, size: 64MB, took 1.238 secs
m31100| Thu Jun 14 01:25:13 [conn2] ******
m31100| Thu Jun 14 01:25:13 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Thu Jun 14 01:25:13 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:25:13 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:25:13 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "d1", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31102" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1503551 r:98 w:36 reslen:112 1504ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
initiated
m30999| Thu Jun 14 01:25:19 [Balancer] MaxChunkSize changing from 64MB to 1MB
m30999| Thu Jun 14 01:25:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975bf2fbdcaaf7b2c072a
m30999| Thu Jun 14 01:25:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31000| Thu Jun 14 01:25:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975c044bfbb7b7d56821d
m31000| Thu Jun 14 01:25:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31100| Thu Jun 14 01:25:21 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:25:21 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:25:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:25:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31100| Thu Jun 14 01:25:21 [rsSync] replSet SECONDARY
m31100| Thu Jun 14 01:25:21 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31100| Thu Jun 14 01:25:21 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31101| Thu Jun 14 01:25:21 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:25:21 [initandlisten] connection accepted from 10.255.119.66:47616 #3 (3 connections now open)
m31100| Thu Jun 14 01:25:21 [conn3] authenticate db: local { authenticate: 1, nonce: "b0b96df73079e480", user: "__system", key: "e23582c57f157ef0ca531786f537ef4f" }
m31101| Thu Jun 14 01:25:21 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:25:21 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:25:21 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:25:21 [FileAllocator] allocating new datafile /data/db/d1-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:25:21 [FileAllocator] creating directory /data/db/d1-1/_tmp
m31102| Thu Jun 14 01:25:21 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:25:21 [initandlisten] connection accepted from 10.255.119.66:47617 #4 (4 connections now open)
m31100| Thu Jun 14 01:25:21 [conn4] authenticate db: local { authenticate: 1, nonce: "8ab9b78bfac9b866", user: "__system", key: "34c2b8c5901d5113644b74c0e7e76ff2" }
m31102| Thu Jun 14 01:25:21 [rsStart] replSet I am domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:25:21 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Thu Jun 14 01:25:21 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:25:21 [FileAllocator] allocating new datafile /data/db/d1-2/local.ns, filling with zeroes...
m31102| Thu Jun 14 01:25:21 [FileAllocator] creating directory /data/db/d1-2/_tmp
m31101| Thu Jun 14 01:25:21 [FileAllocator] done allocating datafile /data/db/d1-1/local.ns, size: 16MB, took 0.222 secs
m31101| Thu Jun 14 01:25:21 [FileAllocator] allocating new datafile /data/db/d1-1/local.0, filling with zeroes...
m31102| Thu Jun 14 01:25:22 [FileAllocator] done allocating datafile /data/db/d1-2/local.ns, size: 16MB, took 0.598 secs
m31101| Thu Jun 14 01:25:22 [FileAllocator] done allocating datafile /data/db/d1-1/local.0, size: 16MB, took 0.618 secs
m31101| Thu Jun 14 01:25:22 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:25:22 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:25:22 [rsSync] ******
m31101| Thu Jun 14 01:25:22 [rsSync] creating replication oplog of size: 40MB...
m31101| Thu Jun 14 01:25:22 [FileAllocator] allocating new datafile /data/db/d1-1/local.1, filling with zeroes...
m31102| Thu Jun 14 01:25:22 [FileAllocator] allocating new datafile /data/db/d1-2/local.0, filling with zeroes...
m31102| Thu Jun 14 01:25:23 [FileAllocator] done allocating datafile /data/db/d1-2/local.0, size: 16MB, took 0.884 secs
m31100| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:25:23 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31101 would veto
m31102| Thu Jun 14 01:25:23 [rsStart] replSet saveConfigLocally done
m31102| Thu Jun 14 01:25:23 [rsStart] replSet STARTUP2
m31102| Thu Jun 14 01:25:23 [rsSync] ******
m31102| Thu Jun 14 01:25:23 [rsSync] creating replication oplog of size: 40MB...
m31102| Thu Jun 14 01:25:23 [FileAllocator] allocating new datafile /data/db/d1-2/local.1, filling with zeroes...
m31101| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31102| Thu Jun 14 01:25:23 [initandlisten] connection accepted from 10.255.119.66:45847 #4 (4 connections now open)
m31102| Thu Jun 14 01:25:23 [conn4] authenticate db: local { authenticate: 1, nonce: "513ccd3a88a11833", user: "__system", key: "e9d3c8a5b631b84169c31ad1887e5f55" }
m31101| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31101| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31102| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31102| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31101| Thu Jun 14 01:25:23 [initandlisten] connection accepted from 10.255.119.66:40373 #4 (4 connections now open)
m31101| Thu Jun 14 01:25:23 [conn4] authenticate db: local { authenticate: 1, nonce: "3f7b87a041b24114", user: "__system", key: "61cd02a76e3d36e4ce5b1b0a520004ae" }
m31102| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31102| Thu Jun 14 01:25:23 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31101| Thu Jun 14 01:25:24 [FileAllocator] done allocating datafile /data/db/d1-1/local.1, size: 64MB, took 1.534 secs
m31102| Thu Jun 14 01:25:25 [FileAllocator] done allocating datafile /data/db/d1-2/local.1, size: 64MB, took 1.513 secs
m31101| Thu Jun 14 01:25:25 [rsSync] ******
m31101| Thu Jun 14 01:25:25 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:25:25 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Thu Jun 14 01:25:25 [rsSync] ******
m31102| Thu Jun 14 01:25:25 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:25:25 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Thu Jun 14 01:25:25 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31100| Thu Jun 14 01:25:25 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m30999| Thu Jun 14 01:25:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975c92fbdcaaf7b2c072b
m30999| Thu Jun 14 01:25:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31000| Thu Jun 14 01:25:30 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975ca44bfbb7b7d56821e
m31000| Thu Jun 14 01:25:30 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31100| Thu Jun 14 01:25:31 [rsMgr] replSet info electSelf 0
m31102| Thu Jun 14 01:25:31 [conn3] replSet RECOVERING
m31102| Thu Jun 14 01:25:31 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31101| Thu Jun 14 01:25:31 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:25:31 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:25:31 [rsMgr] replSet PRIMARY
m31101| Thu Jun 14 01:25:31 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31101| Thu Jun 14 01:25:31 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31102| Thu Jun 14 01:25:31 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31102| Thu Jun 14 01:25:31 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
adding shard w/out auth d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m29000| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:36542 #11 (11 connections now open)
m29000| Thu Jun 14 01:25:33 [conn11] authenticate db: local { authenticate: 1, nonce: "f89ccc329b9f14a", user: "__system", key: "6bc04bfea905faa2aa3970bb46a4848f" }
m31000| Thu Jun 14 01:25:33 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:29000 serverID: 4fd975b644bfbb7b7d56821b
{
"note" : "need to authorized on db: admin for command: addShard",
"ok" : 0,
"errmsg" : "unauthorized"
}
m31000| Thu Jun 14 01:25:33 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "7ca6f630a929ae7a", key: "60b717967eca3840f02fba8e44329262" }
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
adding shard w/wrong key d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:25:33 [conn] starting new replica set monitor for replica set d1 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:25:33 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set d1
m30999| Thu Jun 14 01:25:33 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from d1/
m30999| Thu Jun 14 01:25:33 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set d1
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47621 #5 (5 connections now open)
m30999| Thu Jun 14 01:25:33 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set d1
m30999| Thu Jun 14 01:25:33 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set d1
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47622 #6 (6 connections now open)
m30999| Thu Jun 14 01:25:33 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set d1
m30999| Thu Jun 14 01:25:33 [conn] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set d1
m31101| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:40377 #5 (5 connections now open)
m31102| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:45853 #5 (5 connections now open)
m30999| Thu Jun 14 01:25:33 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set d1
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47625 #7 (7 connections now open)
m31100| Thu Jun 14 01:25:33 [conn7] authenticate db: local { authenticate: 1, nonce: "80396a95fe509f0", user: "__system", key: "00fb061f1312c83f3b2baa48b5987d50" }
m31100| Thu Jun 14 01:25:33 [conn7] auth: key mismatch __system, ns:local
m31100| Thu Jun 14 01:25:33 [conn7] end connection 10.255.119.66:47625 (6 connections now open)
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47626 #8 (7 connections now open)
m31100| Thu Jun 14 01:25:33 [conn5] end connection 10.255.119.66:47621 (5 connections now open)
m31100| Thu Jun 14 01:25:33 [conn8] authenticate db: local { authenticate: 1, nonce: "a978a2976ec59b2c", user: "__system", key: "38d6c6167528f6ed25513c02bc6e3e82" }
m31100| Thu Jun 14 01:25:33 [conn8] auth: key mismatch __system, ns:local
m31100| Thu Jun 14 01:25:33 [conn8] end connection 10.255.119.66:47626 (5 connections now open)
m30999| Thu Jun 14 01:25:33 [conn] Primary for replica set d1 changed to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47627 #9 (6 connections now open)
m31100| Thu Jun 14 01:25:33 [conn9] authenticate db: local { authenticate: 1, nonce: "e31f531d27853708", user: "__system", key: "59ee9b4a728a59df4dbcefce57e92fa3" }
m31100| Thu Jun 14 01:25:33 [conn9] auth: key mismatch __system, ns:local
m31100| Thu Jun 14 01:25:33 [conn9] end connection 10.255.119.66:47627 (5 connections now open)
m31101| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:40382 #6 (6 connections now open)
m31101| Thu Jun 14 01:25:33 [conn6] authenticate db: local { authenticate: 1, nonce: "d95c1f0105010376", user: "__system", key: "68921aae2efe3ea014fb512e9827f95e" }
m31101| Thu Jun 14 01:25:33 [conn6] auth: key mismatch __system, ns:local
m31101| Thu Jun 14 01:25:33 [conn6] end connection 10.255.119.66:40382 (5 connections now open)
m31102| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:45858 #6 (6 connections now open)
m31102| Thu Jun 14 01:25:33 [conn6] authenticate db: local { authenticate: 1, nonce: "7e78bc0e77651ad9", user: "__system", key: "af5170bd38419a4b549b36ede6795d4e" }
m31102| Thu Jun 14 01:25:33 [conn6] auth: key mismatch __system, ns:local
m30999| Thu Jun 14 01:25:33 [conn] replica set monitor for replica set d1 started, address is d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:25:33 [ReplicaSetMonitorWatcher] starting
m31102| Thu Jun 14 01:25:33 [conn6] end connection 10.255.119.66:45858 (5 connections now open)
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47630 #10 (6 connections now open)
m31100| Thu Jun 14 01:25:33 [conn10] authenticate db: local { authenticate: 1, nonce: "a8fa019e3fa09e34", user: "__system", key: "aec8734750e9d2a7bea3e6f0a0076650" }
m31100| Thu Jun 14 01:25:33 [conn10] auth: key mismatch __system, ns:local
m31100| Thu Jun 14 01:25:33 [conn10] end connection 10.255.119.66:47630 (5 connections now open)
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47631 #11 (6 connections now open)
m31100| Thu Jun 14 01:25:33 [conn11] authenticate db: local { authenticate: 1, nonce: "3224bef1cda4537a", user: "__system", key: "610acae3fee0c453f8b7d441d6f4034e" }
m31100| Thu Jun 14 01:25:33 [conn11] auth: key mismatch __system, ns:local
m31100| Thu Jun 14 01:25:33 [conn11] end connection 10.255.119.66:47631 (5 connections now open)
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47632 #12 (6 connections now open)
m31100| Thu Jun 14 01:25:33 [initandlisten] connection accepted from 10.255.119.66:47633 #13 (7 connections now open)
m31100| Thu Jun 14 01:25:33 [conn13] authenticate db: local { authenticate: 1, nonce: "105e2186c7d2634c", user: "__system", key: "6a43526b1775bdc0c518c2ebb7c6e784" }
m31100| Thu Jun 14 01:25:33 [conn13] auth: key mismatch __system, ns:local
m31100| Thu Jun 14 01:25:33 [conn13] end connection 10.255.119.66:47633 (6 connections now open)
m31100| Thu Jun 14 01:25:33 [conn12] authenticate db: local { authenticate: 1, nonce: "9fe55cf482af438e", user: "__system", key: "14afbc0fdf09eca9173b2887686f1ae3" }
m31100| Thu Jun 14 01:25:33 [conn12] auth: key mismatch __system, ns:local
"command {\n\t\"addShard\" : \"d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102\"\n} failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"couldn't connect to new shard can't authenticate to shard server\"\n}"
start rs w/correct key
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Thu Jun 14 01:25:33 [conn12] end connection 10.255.119.66:47632 (5 connections now open)
m31100| Thu Jun 14 01:25:33 [conn6] end connection 10.255.119.66:47622 (4 connections now open)
m30999| Thu Jun 14 01:25:33 [conn] deleting replica set monitor for: d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:25:33 [conn] addshard request { addShard: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" } failed: couldn't connect to new shard can't authenticate to shard server
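The rejected addShard above is the expected outcome of the first half of the test: the d1 members are still running with a key that does not match the cluster's keyFile, so every __system authenticate attempt ends in "auth: key mismatch" and mongos refuses the shard. In shell terms the step presumably looks roughly like the sketch below; the variable names adminDB and shardSpec are assumptions, not taken from the test source.

    // Sketch only: addShard is expected to fail while d1 runs with the wrong key.
    var shardSpec = "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102";
    var res = adminDB.runCommand({ addShard: shardSpec });   // adminDB: admin db of a connection to the mongos
    printjson(res);
    assert.eq(0, res.ok, "addShard should be rejected while d1 still uses a mismatched keyFile");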
m31101| Thu Jun 14 01:25:33 [conn5] end connection 10.255.119.66:40377 (4 connections now open)
m31102| Thu Jun 14 01:25:33 [conn5] end connection 10.255.119.66:45853 (4 connections now open)
m31100| Thu Jun 14 01:25:33 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:25:33 [interruptThread] now exiting
m31100| Thu Jun 14 01:25:33 dbexit:
m31100| Thu Jun 14 01:25:33 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:25:33 [interruptThread] closing listening socket: 26
m31100| Thu Jun 14 01:25:33 [interruptThread] closing listening socket: 27
m31100| Thu Jun 14 01:25:33 [interruptThread] closing listening socket: 29
m31100| Thu Jun 14 01:25:33 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:25:33 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:25:33 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:25:33 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Thu Jun 14 01:25:33 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:25:33 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:25:33 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:25:33 dbexit: really exiting now
m31101| Thu Jun 14 01:25:33 [conn3] end connection 10.255.119.66:40368 (3 connections now open)
m31102| Thu Jun 14 01:25:33 [conn3] end connection 10.255.119.66:45844 (3 connections now open)
m31101| Thu Jun 14 01:25:33 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Thu Jun 14 01:25:33 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31101" }
m31101| Thu Jun 14 01:25:33 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31102| Thu Jun 14 01:25:33 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:25:33 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:25:33 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
Thu Jun 14 01:25:34 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Thu Jun 14 01:25:34 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:25:34 [interruptThread] now exiting
m31101| Thu Jun 14 01:25:34 dbexit:
m31101| Thu Jun 14 01:25:34 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:25:34 [interruptThread] closing listening socket: 29
m31101| Thu Jun 14 01:25:34 [interruptThread] closing listening socket: 30
m31101| Thu Jun 14 01:25:34 [interruptThread] closing listening socket: 32
m31101| Thu Jun 14 01:25:34 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:25:34 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:25:34 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:25:34 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:25:34 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:25:34 [interruptThread] closeAllFiles() finished
m31101| Thu Jun 14 01:25:34 [interruptThread] shutdown: removing fs lock...
m31101| Thu Jun 14 01:25:34 dbexit: really exiting now
m31102| Thu Jun 14 01:25:34 [conn4] end connection 10.255.119.66:45847 (2 connections now open)
Thu Jun 14 01:25:35 shell: stopped mongo program on port 31101
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
ReplSetTest stop *** Shutting down mongod in port 31102 ***
m31102| Thu Jun 14 01:25:35 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Thu Jun 14 01:25:35 [interruptThread] now exiting
m31102| Thu Jun 14 01:25:35 dbexit:
m31102| Thu Jun 14 01:25:35 [interruptThread] shutdown: going to close listening sockets...
m31102| Thu Jun 14 01:25:35 [interruptThread] closing listening socket: 32
m31102| Thu Jun 14 01:25:35 [interruptThread] closing listening socket: 33
m31102| Thu Jun 14 01:25:35 [interruptThread] closing listening socket: 35
m31102| Thu Jun 14 01:25:35 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Thu Jun 14 01:25:35 [interruptThread] shutdown: going to flush diaglog...
m31102| Thu Jun 14 01:25:35 [interruptThread] shutdown: going to close sockets...
m31102| Thu Jun 14 01:25:35 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:25:35 [interruptThread] shutdown: closing all files...
m31102| Thu Jun 14 01:25:35 [conn1] end connection 10.255.119.66:45841 (1 connection now open)
m31102| Thu Jun 14 01:25:35 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:25:35 [interruptThread] shutdown: removing fs lock...
m31102| Thu Jun 14 01:25:35 dbexit: really exiting now
Thu Jun 14 01:25:36 shell: stopped mongo program on port 31102
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
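Having proven the negative case, the test now restarts the replica set with the same keyFile the rest of the cluster uses (jstests/libs/key1); the per-node option dumps that follow show keyFile, oplogSize 40, and the d1 replSet name being passed to each mongod. A minimal sketch of that restart using the shell harness's ReplSetTest helper; the exact constructor and startSet options this test passes are assumptions.

    // Sketch, not the test's literal code: bring d1 back up with the correct key.
    var d1 = new ReplSetTest({ name: "d1", nodes: 3, oplogSize: 40 });
    d1.startSet({ keyFile: "jstests/libs/key1" });  // produces the option dumps and mongod startup logs below
    d1.initiate();                                  // issues the replSetInitiate logged further down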
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key1",
"port" : 31100,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-0'
Thu Jun 14 01:25:36 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31100 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:25:36
m31100| Thu Jun 14 01:25:36 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:25:36
m31100| Thu Jun 14 01:25:36 [initandlisten] MongoDB starting : pid=21658 port=31100 dbpath=/data/db/d1-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:25:36 [initandlisten]
m31100| Thu Jun 14 01:25:36 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:25:36 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:25:36 [initandlisten]
m31100| Thu Jun 14 01:25:36 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:25:36 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:25:36 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:25:36 [initandlisten]
m31100| Thu Jun 14 01:25:36 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:25:36 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:25:36 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:25:36 [initandlisten] options: { dbpath: "/data/db/d1-0", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31100, replSet: "d1", rest: true, smallfiles: true }
m31100| Thu Jun 14 01:25:36 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:25:36 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 10.255.119.66:47635 #1 (1 connection now open)
m31100| Thu Jun 14 01:25:36 [conn1] authenticate db: local { authenticate: 1, nonce: "e32091cd5bf71111", user: "__system", key: "6c55010b994bcd37ab5293df4e8c9cb2" }
m31100| Thu Jun 14 01:25:36 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:25:36 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 127.0.0.1:60446 #2 (2 connections now open)
m31100| Thu Jun 14 01:25:36 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101,
connection to domU-12-31-39-01-70-B4:31102
]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key1",
"port" : 31101,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-1'
Thu Jun 14 01:25:36 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31101 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:25:36
m31101| Thu Jun 14 01:25:36 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:25:36
m31101| Thu Jun 14 01:25:36 [initandlisten] MongoDB starting : pid=21674 port=31101 dbpath=/data/db/d1-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:25:36 [initandlisten]
m31101| Thu Jun 14 01:25:36 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:25:36 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:25:36 [initandlisten]
m31101| Thu Jun 14 01:25:36 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:25:36 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:25:36 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:25:36 [initandlisten]
m31101| Thu Jun 14 01:25:36 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:25:36 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:25:36 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:25:36 [initandlisten] options: { dbpath: "/data/db/d1-1", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "d1", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:25:36 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:25:36 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 10.255.119.66:40392 #1 (1 connection now open)
m31101| Thu Jun 14 01:25:36 [conn1] authenticate db: local { authenticate: 1, nonce: "ed31b0cc62f674d4", user: "__system", key: "ac4a89d7b87bd779f4f01c698bb2181d" }
m31101| Thu Jun 14 01:25:36 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:25:36 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101,
connection to domU-12-31-39-01-70-B4:31102
]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key1",
"port" : 31102,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-2'
Thu Jun 14 01:25:36 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31102 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-2
m31101| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 127.0.0.1:48356 #2 (2 connections now open)
m31101| Thu Jun 14 01:25:36 [conn2] note: no users configured in admin.system.users, allowing localhost access
m31102| note: noprealloc may hurt performance in many applications
m31102| Thu Jun 14 01:25:36
m31102| Thu Jun 14 01:25:36 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Thu Jun 14 01:25:36
m31102| Thu Jun 14 01:25:36 [initandlisten] MongoDB starting : pid=21689 port=31102 dbpath=/data/db/d1-2 32-bit host=domU-12-31-39-01-70-B4
m31102| Thu Jun 14 01:25:36 [initandlisten]
m31102| Thu Jun 14 01:25:36 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Thu Jun 14 01:25:36 [initandlisten] ** Not recommended for production.
m31102| Thu Jun 14 01:25:36 [initandlisten]
m31102| Thu Jun 14 01:25:36 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Thu Jun 14 01:25:36 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Thu Jun 14 01:25:36 [initandlisten] ** with --journal, the limit is lower
m31102| Thu Jun 14 01:25:36 [initandlisten]
m31102| Thu Jun 14 01:25:36 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Thu Jun 14 01:25:36 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Thu Jun 14 01:25:36 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31102| Thu Jun 14 01:25:36 [initandlisten] options: { dbpath: "/data/db/d1-2", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31102, replSet: "d1", rest: true, smallfiles: true }
m31102| Thu Jun 14 01:25:36 [websvr] admin web console waiting for connections on port 32102
m31102| Thu Jun 14 01:25:36 [initandlisten] waiting for connections on port 31102
m31102| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 10.255.119.66:45870 #1 (1 connection now open)
m31102| Thu Jun 14 01:25:36 [conn1] authenticate db: local { authenticate: 1, nonce: "1c102efa30bd2c70", user: "__system", key: "a6cc3edb1c66a81350709ff553a75db4" }
m31102| Thu Jun 14 01:25:36 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Thu Jun 14 01:25:36 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 127.0.0.1:38647 #2 (2 connections now open)
m31102| Thu Jun 14 01:25:36 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101,
connection to domU-12-31-39-01-70-B4:31102
]
{
"replSetInitiate" : {
"_id" : "d1",
"members" : [
{
"_id" : 0,
"host" : "domU-12-31-39-01-70-B4:31100"
},
{
"_id" : 1,
"host" : "domU-12-31-39-01-70-B4:31101"
},
{
"_id" : 2,
"host" : "domU-12-31-39-01-70-B4:31102"
}
]
}
}
m31100| Thu Jun 14 01:25:36 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:25:36 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 10.255.119.66:40397 #3 (3 connections now open)
m31101| Thu Jun 14 01:25:36 [conn3] authenticate db: local { authenticate: 1, nonce: "a34477106d8c45c5", user: "__system", key: "5a266325b9c70451028b4bb21fe1cebe" }
m31102| Thu Jun 14 01:25:36 [initandlisten] connection accepted from 10.255.119.66:45873 #3 (3 connections now open)
m31102| Thu Jun 14 01:25:36 [conn3] authenticate db: local { authenticate: 1, nonce: "381515f2104d8330", user: "__system", key: "c7fa7f29f67615a0491266e4c4e58475" }
m31100| Thu Jun 14 01:25:36 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:25:36 [conn2] ******
m31100| Thu Jun 14 01:25:36 [conn2] creating replication oplog of size: 40MB...
m31100| Thu Jun 14 01:25:36 [FileAllocator] allocating new datafile /data/db/d1-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:25:36 [FileAllocator] creating directory /data/db/d1-0/_tmp
m31100| Thu Jun 14 01:25:37 [FileAllocator] done allocating datafile /data/db/d1-0/local.ns, size: 16MB, took 0.264 secs
m31100| Thu Jun 14 01:25:37 [FileAllocator] allocating new datafile /data/db/d1-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:25:38 [FileAllocator] done allocating datafile /data/db/d1-0/local.0, size: 64MB, took 1.168 secs
m31100| Thu Jun 14 01:25:38 [conn2] ******
m31100| Thu Jun 14 01:25:38 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Thu Jun 14 01:25:38 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:25:38 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:25:38 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "d1", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31102" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1483276 r:69 w:35 reslen:112 1480ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
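The config document printed above is handed to the replSetInitiate admin command on the node at port 31100; rs.initiate(cfg) in the shell is a thin wrapper around the same command. An equivalent shell invocation, as a sketch, with the host names copied from the log:

    // Equivalent to the initiate step logged above; run while connected to the node on port 31100.
    var cfg = {
        _id: "d1",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31101" },
            { _id: 2, host: "domU-12-31-39-01-70-B4:31102" }
        ]
    };
    printjson(db.getSisterDB("admin").runCommand({ replSetInitiate: cfg }));
    // Expected reply, as shown above:
    // { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }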
m30999| Thu Jun 14 01:25:39 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975d32fbdcaaf7b2c072c
m30999| Thu Jun 14 01:25:39 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31000| Thu Jun 14 01:25:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975d444bfbb7b7d56821f
m31000| Thu Jun 14 01:25:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31100| Thu Jun 14 01:25:46 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:25:46 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:25:46 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31100| Thu Jun 14 01:25:46 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:25:46 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:25:46 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:25:46 [initandlisten] connection accepted from 10.255.119.66:47645 #3 (3 connections now open)
m31100| Thu Jun 14 01:25:46 [conn3] authenticate db: local { authenticate: 1, nonce: "5640d053a239292f", user: "__system", key: "b7449512a2b5ce08360f4241b02113aa" }
m31101| Thu Jun 14 01:25:46 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:25:46 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:25:46 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:25:46 [FileAllocator] allocating new datafile /data/db/d1-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:25:46 [FileAllocator] creating directory /data/db/d1-1/_tmp
m31102| Thu Jun 14 01:25:46 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:25:46 [initandlisten] connection accepted from 10.255.119.66:47646 #4 (4 connections now open)
m31100| Thu Jun 14 01:25:46 [conn4] authenticate db: local { authenticate: 1, nonce: "4df2d80034227c7e", user: "__system", key: "e4b5bf698b6ce2171f9f91545aa3fcbc" }
m31102| Thu Jun 14 01:25:46 [rsStart] replSet I am domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:25:46 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Thu Jun 14 01:25:46 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:25:46 [FileAllocator] allocating new datafile /data/db/d1-2/local.ns, filling with zeroes...
m31102| Thu Jun 14 01:25:46 [FileAllocator] creating directory /data/db/d1-2/_tmp
m31101| Thu Jun 14 01:25:46 [FileAllocator] done allocating datafile /data/db/d1-1/local.ns, size: 16MB, took 0.232 secs
m31101| Thu Jun 14 01:25:46 [FileAllocator] allocating new datafile /data/db/d1-1/local.0, filling with zeroes...
m31101| Thu Jun 14 01:25:47 [FileAllocator] done allocating datafile /data/db/d1-1/local.0, size: 16MB, took 0.564 secs
m31102| Thu Jun 14 01:25:47 [FileAllocator] done allocating datafile /data/db/d1-2/local.ns, size: 16MB, took 0.548 secs
m31102| Thu Jun 14 01:25:47 [FileAllocator] allocating new datafile /data/db/d1-2/local.0, filling with zeroes...
m31101| Thu Jun 14 01:25:47 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:25:47 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:25:47 [rsSync] ******
m31101| Thu Jun 14 01:25:47 [rsSync] creating replication oplog of size: 40MB...
m31101| Thu Jun 14 01:25:47 [FileAllocator] allocating new datafile /data/db/d1-1/local.1, filling with zeroes...
m31102| Thu Jun 14 01:25:47 [FileAllocator] done allocating datafile /data/db/d1-2/local.0, size: 16MB, took 0.3 secs
m31102| Thu Jun 14 01:25:47 [rsStart] replSet saveConfigLocally done
m31102| Thu Jun 14 01:25:47 [rsStart] replSet STARTUP2
m31102| Thu Jun 14 01:25:47 [rsSync] ******
m31102| Thu Jun 14 01:25:47 [rsSync] creating replication oplog of size: 40MB...
m31102| Thu Jun 14 01:25:47 [FileAllocator] allocating new datafile /data/db/d1-2/local.1, filling with zeroes...
m31100| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31100| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:25:48 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31100| Thu Jun 14 01:25:48 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31101| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31102| Thu Jun 14 01:25:48 [initandlisten] connection accepted from 10.255.119.66:45876 #4 (4 connections now open)
m31102| Thu Jun 14 01:25:48 [conn4] authenticate db: local { authenticate: 1, nonce: "a29ac022e82a23b2", user: "__system", key: "fbec29d5750d9260cff86cdaf4bcc7e6" }
m31101| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31101| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31102| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31102| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31101| Thu Jun 14 01:25:48 [initandlisten] connection accepted from 10.255.119.66:40402 #4 (4 connections now open)
m31101| Thu Jun 14 01:25:48 [conn4] authenticate db: local { authenticate: 1, nonce: "540740d7f51cc817", user: "__system", key: "d1fc56a3d3a2b6ffd77779610d097e94" }
m31102| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31102| Thu Jun 14 01:25:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m30999| Thu Jun 14 01:25:49 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975dd2fbdcaaf7b2c072d
m30999| Thu Jun 14 01:25:49 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31102| Thu Jun 14 01:25:49 [FileAllocator] done allocating datafile /data/db/d1-2/local.1, size: 64MB, took 2.297 secs
m31101| Thu Jun 14 01:25:49 [FileAllocator] done allocating datafile /data/db/d1-1/local.1, size: 64MB, took 2.448 secs
m31101| Thu Jun 14 01:25:49 [rsSync] ******
m31101| Thu Jun 14 01:25:49 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:25:49 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Thu Jun 14 01:25:50 [rsSync] ******
m31102| Thu Jun 14 01:25:50 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:25:50 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31000| Thu Jun 14 01:25:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975de44bfbb7b7d568220
m31000| Thu Jun 14 01:25:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31100| Thu Jun 14 01:25:54 [rsMgr] replSet info electSelf 0
m31102| Thu Jun 14 01:25:54 [conn3] replSet RECOVERING
m31102| Thu Jun 14 01:25:54 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31101| Thu Jun 14 01:25:54 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:25:54 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:25:54 [rsMgr] replSet PRIMARY
m31101| Thu Jun 14 01:25:54 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31101| Thu Jun 14 01:25:54 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31102| Thu Jun 14 01:25:54 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31102| Thu Jun 14 01:25:54 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
adding shard w/auth d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31000| Thu Jun 14 01:25:56 [conn] starting new replica set monitor for replica set d1 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:54741 #5 (5 connections now open)
m31000| Thu Jun 14 01:25:56 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set d1
m31000| Thu Jun 14 01:25:56 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from d1/
m31000| Thu Jun 14 01:25:56 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set d1
m31100| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:54742 #6 (6 connections now open)
m31000| Thu Jun 14 01:25:56 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set d1
m31000| Thu Jun 14 01:25:56 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set d1
m31000| Thu Jun 14 01:25:56 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set d1
m31000| Thu Jun 14 01:25:56 [conn] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set d1
m31101| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:46062 #5 (5 connections now open)
m31000| Thu Jun 14 01:25:56 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set d1
m31102| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:54685 #5 (5 connections now open)
m31100| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:54745 #7 (7 connections now open)
m31100| Thu Jun 14 01:25:56 [conn7] authenticate db: local { authenticate: 1, nonce: "7c4ea1c94f791a11", user: "__system", key: "74127a59debf1ffc5a358675a1f3d74c" }
m31100| Thu Jun 14 01:25:56 [conn5] end connection 10.255.119.66:54741 (6 connections now open)
m31000| Thu Jun 14 01:25:56 [conn] Primary for replica set d1 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:46065 #6 (6 connections now open)
m31101| Thu Jun 14 01:25:56 [conn6] authenticate db: local { authenticate: 1, nonce: "28db90a0f9a437dd", user: "__system", key: "662af29a712afaf25137de2b416f0b62" }
m31102| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:54688 #6 (6 connections now open)
m31102| Thu Jun 14 01:25:56 [conn6] authenticate db: local { authenticate: 1, nonce: "502f636f863b32d9", user: "__system", key: "8a155efed0ee2fe7a341c1fa5c44fec8" }
m31000| Thu Jun 14 01:25:56 [conn] replica set monitor for replica set d1 started, address is d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31000| Thu Jun 14 01:25:56 [ReplicaSetMonitorWatcher] starting
m31100| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:54748 #8 (7 connections now open)
m31100| Thu Jun 14 01:25:56 [conn8] authenticate db: local { authenticate: 1, nonce: "93ded5a093abed5a", user: "__system", key: "8ce034e6798a03847dd1114fc6d5d2dc" }
m31000| Thu Jun 14 01:25:56 [conn] going to add shard: { _id: "d1", host: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" }
m31000| Thu Jun 14 01:25:56 [conn] couldn't find database [test] in config db
m31000| Thu Jun 14 01:25:56 [conn] put [test] on: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31000| Thu Jun 14 01:25:56 [conn] enabling sharding on: test
m31000| Thu Jun 14 01:25:56 [conn] CMD: shardcollection: { shardCollection: "test.foo", key: { x: 1.0 } }
m31000| Thu Jun 14 01:25:56 [conn] enable sharding on: test.foo with shard key: { x: 1.0 }
m31000| Thu Jun 14 01:25:56 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:25:56 [FileAllocator] allocating new datafile /data/db/d1-0/test.ns, filling with zeroes...
m31000| Thu Jun 14 01:25:56 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd975e444bfbb7b7d568221 based on: (empty)
m29000| Thu Jun 14 01:25:56 [conn8] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:25:56 [conn8] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:54749 #9 (8 connections now open)
m31100| Thu Jun 14 01:25:56 [conn9] authenticate db: local { authenticate: 1, nonce: "9fbf06efb0878bef", user: "__system", key: "ec48ec50192caf09bc3c259def6eda9a" }
m31000| Thu Jun 14 01:25:56 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31100 serverID: 4fd975b644bfbb7b7d56821b
m31000| Thu Jun 14 01:25:56 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31101 serverID: 4fd975b644bfbb7b7d56821b
m31000| Thu Jun 14 01:25:56 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31102 serverID: 4fd975b644bfbb7b7d56821b
m31100| Thu Jun 14 01:25:56 [FileAllocator] done allocating datafile /data/db/d1-0/test.ns, size: 16MB, took 0.213 secs
m31100| Thu Jun 14 01:25:56 [FileAllocator] allocating new datafile /data/db/d1-0/test.0, filling with zeroes...
m31100| Thu Jun 14 01:25:56 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31100| Thu Jun 14 01:25:56 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31100| Thu Jun 14 01:25:56 [FileAllocator] done allocating datafile /data/db/d1-0/test.0, size: 16MB, took 0.289 secs
m31100| Thu Jun 14 01:25:56 [conn8] build index test.foo { _id: 1 }
m31100| Thu Jun 14 01:25:56 [conn8] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:25:56 [conn8] info: creating collection test.foo on add index
m31100| Thu Jun 14 01:25:56 [conn8] build index test.foo { x: 1.0 }
m31100| Thu Jun 14 01:25:56 [conn8] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:25:56 [conn8] insert test.system.indexes keyUpdates:0 locks(micros) R:8 W:73 r:249 w:513724 513ms
m31100| Thu Jun 14 01:25:56 [conn9] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd975b644bfbb7b7d56821b'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:489 reslen:51 502ms
m31100| Thu Jun 14 01:25:56 [conn9] no current chunk manager found for this shard, will initialize
m29000| Thu Jun 14 01:25:56 [initandlisten] connection accepted from 10.255.119.66:51837 #12 (12 connections now open)
m29000| Thu Jun 14 01:25:56 [conn12] authenticate db: local { authenticate: 1, nonce: "1e3573489d35f29c", user: "__system", key: "d5424660eeddc4e6c418dbac3f667848" }
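With the key now matching, the addShard retry succeeds and the test shards test.foo on { x: 1 } through the mongos; the "going to add shard", "enabling sharding on: test", and "CMD: shardcollection" lines above are the server side of those three admin commands. The client side is roughly the sketch below, assuming admin is the admin database of an authenticated connection to the mongos on port 31000.

    // Sketch of the commands behind the mongos log lines above.
    var shardSpec = "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102";
    var res = admin.runCommand({ addShard: shardSpec });
    assert.eq(1, res.ok, tojson(res));                                      // succeeds this time
    assert.eq(1, admin.runCommand({ enableSharding: "test" }).ok);
    assert.eq(1, admin.runCommand({ shardCollection: "test.foo", key: { x: 1 } }).ok);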
ReplSetTest waitForIndicator state on connection to domU-12-31-39-01-70-B4:31101
[ 2 ]
ReplSetTest waitForIndicator from node connection to domU-12-31-39-01-70-B4:31101
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
"set" : "d1",
"date" : ISODate("2012-06-14T05:25:56Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "domU-12-31-39-01-70-B4:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 20,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"self" : true
},
{
"_id" : 1,
"name" : "domU-12-31-39-01-70-B4:31101",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 10,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:25:56Z"),
"pingMs" : 0,
"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
},
{
"_id" : 2,
"name" : "domU-12-31-39-01-70-B4:31102",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 10,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:25:56Z"),
"pingMs" : 0,
"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
}
],
"ok" : 1
}
Status for : domU-12-31-39-01-70-B4:31100, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
Status for : domU-12-31-39-01-70-B4:31101, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
Status : 3 target state : 2
Status for : domU-12-31-39-01-70-B4:31102, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
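waitForIndicator is polling replSetGetStatus (the command behind rs.status()) on the primary until member 31101 reports state 2 (SECONDARY); in the dump above it is still state 3 (RECOVERING) because initial sync has not run yet. The loop has roughly the shape sketched below; the real helper lives in the shell test framework, and primary here stands for an already-authenticated connection to the node on port 31100.

    // Rough shape of the wait, not the harness's literal implementation.
    assert.soon(function() {
        var status = primary.getDB("admin").runCommand({ replSetGetStatus: 1 });
        return status.members.some(function(m) {
            return m.name == "domU-12-31-39-01-70-B4:31101" && m.state == 2;   // 2 == SECONDARY
        });
    }, "domU-12-31-39-01-70-B4:31101 never reached SECONDARY", 300000);        // same 300000 ms timeout as above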
m30999| Thu Jun 14 01:25:59 [Balancer] starting new replica set monitor for replica set d1 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:54751 #10 (9 connections now open)
m30999| Thu Jun 14 01:25:59 [Balancer] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set d1
m30999| Thu Jun 14 01:25:59 [Balancer] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from d1/
m30999| Thu Jun 14 01:25:59 [Balancer] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set d1
m30999| Thu Jun 14 01:25:59 [Balancer] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set d1
m30999| Thu Jun 14 01:25:59 [Balancer] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set d1
m31100| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:54752 #11 (10 connections now open)
m31101| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:46072 #7 (7 connections now open)
m30999| Thu Jun 14 01:25:59 [Balancer] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set d1
m30999| Thu Jun 14 01:25:59 [Balancer] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set d1
m30999| Thu Jun 14 01:25:59 [Balancer] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set d1
m31102| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:54695 #7 (7 connections now open)
m31100| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:54755 #12 (11 connections now open)
m31100| Thu Jun 14 01:25:59 [conn12] authenticate db: local { authenticate: 1, nonce: "32d5824fea3641c5", user: "__system", key: "73a02ea94d9d6869ba62d05df993385d" }
m31100| Thu Jun 14 01:25:59 [conn10] end connection 10.255.119.66:54751 (10 connections now open)
m30999| Thu Jun 14 01:25:59 [Balancer] Primary for replica set d1 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:46075 #8 (8 connections now open)
m31101| Thu Jun 14 01:25:59 [conn8] authenticate db: local { authenticate: 1, nonce: "319a2e71f32a8ed2", user: "__system", key: "3ed79728f9e261b666d6158863f39e69" }
m31102| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:54698 #8 (8 connections now open)
m31102| Thu Jun 14 01:25:59 [conn8] authenticate db: local { authenticate: 1, nonce: "c242fc71f8126d20", user: "__system", key: "36a3e9de2c895b871197634ea03778ce" }
m30999| Thu Jun 14 01:25:59 [Balancer] replica set monitor for replica set d1 started, address is d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:25:59 [initandlisten] connection accepted from 10.255.119.66:54758 #13 (11 connections now open)
m31100| Thu Jun 14 01:25:59 [conn13] authenticate db: local { authenticate: 1, nonce: "482e196c0de26f22", user: "__system", key: "6bf23a35b71b49f9f4bec1e13804735d" }
m30999| Thu Jun 14 01:25:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975e72fbdcaaf7b2c072e
m30999| Thu Jun 14 01:25:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31101| Thu Jun 14 01:26:00 [conn3] end connection 10.255.119.66:40397 (7 connections now open)
m31101| Thu Jun 14 01:26:00 [initandlisten] connection accepted from 10.255.119.66:46078 #9 (8 connections now open)
m31101| Thu Jun 14 01:26:00 [conn9] authenticate db: local { authenticate: 1, nonce: "dee97730fd643ba5", user: "__system", key: "bb3d10468d5561f46209e85265082d0d" }
m31000| Thu Jun 14 01:26:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975e844bfbb7b7d568222
m31000| Thu Jun 14 01:26:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31100| Thu Jun 14 01:26:02 [conn3] end connection 10.255.119.66:47645 (10 connections now open)
m31100| Thu Jun 14 01:26:02 [initandlisten] connection accepted from 10.255.119.66:54760 #14 (11 connections now open)
m31100| Thu Jun 14 01:26:02 [conn14] authenticate db: local { authenticate: 1, nonce: "b634d5ec5aaf6b47", user: "__system", key: "2b182b8e3bbe1e394f5a3057e3cbc581" }
{
"set" : "d1",
"date" : ISODate("2012-06-14T05:26:02Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "domU-12-31-39-01-70-B4:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 26,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"self" : true
},
{
"_id" : 1,
"name" : "domU-12-31-39-01-70-B4:31101",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 16,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:02Z"),
"pingMs" : 0,
"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
},
{
"_id" : 2,
"name" : "domU-12-31-39-01-70-B4:31102",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 16,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:02Z"),
"pingMs" : 0,
"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
}
],
"ok" : 1
}
Status for : domU-12-31-39-01-70-B4:31100, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
Status for : domU-12-31-39-01-70-B4:31101, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
Status : 3 target state : 2
Status for : domU-12-31-39-01-70-B4:31102, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
m31100| Thu Jun 14 01:26:02 [conn4] end connection 10.255.119.66:47646 (10 connections now open)
m31100| Thu Jun 14 01:26:02 [initandlisten] connection accepted from 10.255.119.66:54761 #15 (11 connections now open)
m31100| Thu Jun 14 01:26:02 [conn15] authenticate db: local { authenticate: 1, nonce: "4ff9a71ca6ce1735", user: "__system", key: "79f2412a370084e03b7f293882f0fc03" }
m31101| Thu Jun 14 01:26:05 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:26:05 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:26:05 [initandlisten] connection accepted from 10.255.119.66:54762 #16 (12 connections now open)
m31100| Thu Jun 14 01:26:05 [conn16] authenticate db: local { authenticate: 1, nonce: "b02c97ce2183bf10", user: "__system", key: "21905750497b66f939b189369db53ee0" }
m31101| Thu Jun 14 01:26:05 [rsSync] build index local.me { _id: 1 }
m31101| Thu Jun 14 01:26:05 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:05 [rsSync] replSet initial sync drop all databases
m31101| Thu Jun 14 01:26:05 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Thu Jun 14 01:26:05 [rsSync] replSet initial sync clone all databases
m31101| Thu Jun 14 01:26:05 [rsSync] replSet initial sync cloning db: test
m31100| Thu Jun 14 01:26:05 [initandlisten] connection accepted from 10.255.119.66:54763 #17 (13 connections now open)
m31100| Thu Jun 14 01:26:05 [conn17] authenticate db: local { authenticate: 1, nonce: "1742658daefdb068", user: "__system", key: "486ac7003522806fd68689385abe6aae" }
m31101| Thu Jun 14 01:26:05 [FileAllocator] allocating new datafile /data/db/d1-1/test.ns, filling with zeroes...
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:26:06 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54764 #18 (14 connections now open)
m31100| Thu Jun 14 01:26:06 [conn18] authenticate db: local { authenticate: 1, nonce: "b1f36f8092eac3ab", user: "__system", key: "f5607eceec33cd252400f18a302efb2c" }
m31102| Thu Jun 14 01:26:06 [rsSync] build index local.me { _id: 1 }
m31102| Thu Jun 14 01:26:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync drop all databases
m31102| Thu Jun 14 01:26:06 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync clone all databases
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync cloning db: test
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54765 #19 (15 connections now open)
m31100| Thu Jun 14 01:26:06 [conn19] authenticate db: local { authenticate: 1, nonce: "a6d5df4a7585f13e", user: "__system", key: "5180359dea3140005f278df2c9efa341" }
m31102| Thu Jun 14 01:26:06 [FileAllocator] allocating new datafile /data/db/d1-2/test.ns, filling with zeroes...
m31102| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54709 #9 (9 connections now open)
m31102| Thu Jun 14 01:26:06 [conn9] authenticate db: local { authenticate: 1, nonce: "5b15528a1bd1ef21", user: "__system", key: "c8ea203daa8c5116dcd3c8f5b4a6b14f" }
m31101| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:46086 #10 (9 connections now open)
m31101| Thu Jun 14 01:26:06 [conn10] authenticate db: local { authenticate: 1, nonce: "f2a52a220a8cbf75", user: "__system", key: "177e873826f955b75cceafbcde9a1ad0" }
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54766 #20 (16 connections now open)
m31100| Thu Jun 14 01:26:06 [conn20] authenticate db: local { authenticate: 1, nonce: "dc0601f8a72bd5b8", user: "__system", key: "3aee99fcc1c84048f0d5057a24db0266" }
m31101| Thu Jun 14 01:26:06 [FileAllocator] done allocating datafile /data/db/d1-1/test.ns, size: 16MB, took 0.219 secs
m31101| Thu Jun 14 01:26:06 [FileAllocator] allocating new datafile /data/db/d1-1/test.0, filling with zeroes...
m31102| Thu Jun 14 01:26:06 [FileAllocator] done allocating datafile /data/db/d1-2/test.ns, size: 16MB, took 0.408 secs
m31102| Thu Jun 14 01:26:06 [FileAllocator] allocating new datafile /data/db/d1-2/test.0, filling with zeroes...
m31101| Thu Jun 14 01:26:06 [FileAllocator] done allocating datafile /data/db/d1-1/test.0, size: 16MB, took 0.517 secs
m31100| Thu Jun 14 01:26:06 [conn17] end connection 10.255.119.66:54763 (15 connections now open)
m31101| Thu Jun 14 01:26:06 [rsSync] build index test.foo { _id: 1 }
m31101| Thu Jun 14 01:26:06 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Thu Jun 14 01:26:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54769 #21 (16 connections now open)
m31100| Thu Jun 14 01:26:06 [conn21] authenticate db: local { authenticate: 1, nonce: "d426f22add03982a", user: "__system", key: "9ec060cdad9770af9a67365e9166328a" }
m31100| Thu Jun 14 01:26:06 [conn21] end connection 10.255.119.66:54769 (15 connections now open)
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync data copy, starting syncup
m31101| Thu Jun 14 01:26:06 [rsSync] build index test.foo { x: 1.0 }
m31101| Thu Jun 14 01:26:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync building indexes
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync cloning indexes for : test
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54770 #22 (16 connections now open)
m31100| Thu Jun 14 01:26:06 [conn22] authenticate db: local { authenticate: 1, nonce: "1158a95ac139a2cc", user: "__system", key: "596e8b5bc606d0589006933446a5a958" }
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Thu Jun 14 01:26:06 [conn22] end connection 10.255.119.66:54770 (15 connections now open)
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54771 #23 (16 connections now open)
m31100| Thu Jun 14 01:26:06 [conn23] authenticate db: local { authenticate: 1, nonce: "4816a7a7f4596035", user: "__system", key: "c3b569bb780145b4317126aa41a621a0" }
m31100| Thu Jun 14 01:26:06 [conn23] end connection 10.255.119.66:54771 (15 connections now open)
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync query minValid
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync finishing up
m31101| Thu Jun 14 01:26:06 [rsSync] replSet set minValid=4fd975e4:1
m31101| Thu Jun 14 01:26:06 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Thu Jun 14 01:26:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:06 [rsSync] replSet initial sync done
m31102| Thu Jun 14 01:26:06 [FileAllocator] done allocating datafile /data/db/d1-2/test.0, size: 16MB, took 0.549 secs
m31102| Thu Jun 14 01:26:06 [rsSync] build index test.foo { _id: 1 }
m31102| Thu Jun 14 01:26:06 [rsSync] fastBuildIndex dupsToDrop:0
m31102| Thu Jun 14 01:26:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:26:06 [conn16] end connection 10.255.119.66:54762 (14 connections now open)
m31100| Thu Jun 14 01:26:06 [conn19] end connection 10.255.119.66:54765 (13 connections now open)
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54772 #24 (14 connections now open)
m31100| Thu Jun 14 01:26:06 [conn24] authenticate db: local { authenticate: 1, nonce: "f828bec015464f75", user: "__system", key: "14fd24dc3ae6fa8e43e7e4c21f03dc47" }
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync data copy, starting syncup
m31102| Thu Jun 14 01:26:06 [rsSync] build index test.foo { x: 1.0 }
m31102| Thu Jun 14 01:26:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync building indexes
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync cloning indexes for : test
m31100| Thu Jun 14 01:26:06 [conn24] end connection 10.255.119.66:54772 (13 connections now open)
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54773 #25 (14 connections now open)
m31100| Thu Jun 14 01:26:06 [conn25] authenticate db: local { authenticate: 1, nonce: "f6830864b3e71962", user: "__system", key: "c8c720c6f5ed99888a4a56deaa40023d" }
m31100| Thu Jun 14 01:26:06 [conn25] end connection 10.255.119.66:54773 (13 connections now open)
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Thu Jun 14 01:26:06 [initandlisten] connection accepted from 10.255.119.66:54774 #26 (14 connections now open)
m31100| Thu Jun 14 01:26:06 [conn26] authenticate db: local { authenticate: 1, nonce: "aab2bc0ad65574f6", user: "__system", key: "3b71e16500210d299e7426afb6379508" }
m31100| Thu Jun 14 01:26:06 [conn26] end connection 10.255.119.66:54774 (13 connections now open)
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync query minValid
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync finishing up
m31102| Thu Jun 14 01:26:06 [rsSync] replSet set minValid=4fd975e4:1
m31102| Thu Jun 14 01:26:06 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Thu Jun 14 01:26:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:26:06 [conn18] end connection 10.255.119.66:54764 (12 connections now open)
m31102| Thu Jun 14 01:26:06 [rsSync] replSet initial sync done
m31101| Thu Jun 14 01:26:07 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:26:07 [initandlisten] connection accepted from 10.255.119.66:54775 #27 (13 connections now open)
m31100| Thu Jun 14 01:26:07 [conn27] authenticate db: local { authenticate: 1, nonce: "58560f51310ebb5d", user: "__system", key: "1411cddc3e777b058111cd489ff65d1f" }
m31101| Thu Jun 14 01:26:07 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:26:07 [initandlisten] connection accepted from 10.255.119.66:54776 #28 (14 connections now open)
m31100| Thu Jun 14 01:26:07 [conn28] authenticate db: local { authenticate: 1, nonce: "47ec6dd3e7dc7c00", user: "__system", key: "ee807ef5bfbb6fcaae8a96643184c6d7" }
m31102| Thu Jun 14 01:26:07 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:26:07 [initandlisten] connection accepted from 10.255.119.66:54777 #29 (15 connections now open)
m31100| Thu Jun 14 01:26:07 [conn29] authenticate db: local { authenticate: 1, nonce: "6d63d3c75bf1716e", user: "__system", key: "28dd1be0df2a84b3f1a24c826b0c7b8d" }
m31101| Thu Jun 14 01:26:07 [rsSync] replSet SECONDARY
m31102| Thu Jun 14 01:26:07 [rsSync] replSet SECONDARY
m31102| Thu Jun 14 01:26:07 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:26:07 [initandlisten] connection accepted from 10.255.119.66:54778 #30 (16 connections now open)
m31100| Thu Jun 14 01:26:07 [conn30] authenticate db: local { authenticate: 1, nonce: "5b1bd62b2a9ba1e0", user: "__system", key: "1d4f051a5b2c4ac221b4e2ee78898f29" }
m31100| Thu Jun 14 01:26:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m31100| Thu Jun 14 01:26:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31101| Thu Jun 14 01:26:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
{
"set" : "d1",
"date" : ISODate("2012-06-14T05:26:08Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "domU-12-31-39-01-70-B4:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 32,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"self" : true
},
{
"_id" : 1,
"name" : "domU-12-31-39-01-70-B4:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "domU-12-31-39-01-70-B4:31102",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
}
],
"ok" : 1
}
Status for : domU-12-31-39-01-70-B4:31100, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
Status for : domU-12-31-39-01-70-B4:31101, checking domU-12-31-39-01-70-B4:31101/domU-12-31-39-01-70-B4:31101
Status : 2 target state : 2
ReplSetTest waitForIndicator final status:
{
"set" : "d1",
"date" : ISODate("2012-06-14T05:26:08Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "domU-12-31-39-01-70-B4:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 32,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"self" : true
},
{
"_id" : 1,
"name" : "domU-12-31-39-01-70-B4:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "domU-12-31-39-01-70-B4:31102",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
}
],
"ok" : 1
}
ReplSetTest waitForIndicator state on connection to domU-12-31-39-01-70-B4:31102
[ 2 ]
ReplSetTest waitForIndicator from node connection to domU-12-31-39-01-70-B4:31102
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
"set" : "d1",
"date" : ISODate("2012-06-14T05:26:08Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "domU-12-31-39-01-70-B4:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 32,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"self" : true
},
{
"_id" : 1,
"name" : "domU-12-31-39-01-70-B4:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "domU-12-31-39-01-70-B4:31102",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
}
],
"ok" : 1
}
Status for : domU-12-31-39-01-70-B4:31100, checking domU-12-31-39-01-70-B4:31102/domU-12-31-39-01-70-B4:31102
Status for : domU-12-31-39-01-70-B4:31101, checking domU-12-31-39-01-70-B4:31102/domU-12-31-39-01-70-B4:31102
Status for : domU-12-31-39-01-70-B4:31102, checking domU-12-31-39-01-70-B4:31102/domU-12-31-39-01-70-B4:31102
Status : 2 target state : 2
ReplSetTest waitForIndicator final status:
{
"set" : "d1",
"date" : ISODate("2012-06-14T05:26:08Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "domU-12-31-39-01-70-B4:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 32,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"self" : true
},
{
"_id" : 1,
"name" : "domU-12-31-39-01-70-B4:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "domU-12-31-39-01-70-B4:31102",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 22,
"optime" : Timestamp(1339651556000, 1),
"optimeDate" : ISODate("2012-06-14T05:25:56Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:26:08Z"),
"pingMs" : 0
}
],
"ok" : 1
}
{
"user" : "bar",
"readOnly" : false,
"pwd" : "131d1786e1320446336c3943bfc7ba1c",
"_id" : ObjectId("4fd975f06f2560a998175b70")
}
m31100| Thu Jun 14 01:26:08 [conn9] build index test.system.users { _id: 1 }
m31100| Thu Jun 14 01:26:08 [conn9] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:26:08 [rsSync] build index test.system.users { _id: 1 }
m31102| Thu Jun 14 01:26:08 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:08 [rsSync] build index test.system.users { _id: 1 }
m31101| Thu Jun 14 01:26:08 [rsSync] build index done. scanned 0 total records. 0 secs
{
    "user" : "sad",
    "readOnly" : true,
    "pwd" : "b874a27b7105ec1cfd1f26a5f7d27eca",
    "_id" : ObjectId("4fd975f06f2560a998175b71")
}
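The two documents above are the entries the test just wrote into test.system.users: a read-write user "bar" and a read-only user "sad". In this pre-2.4 credential scheme the stored pwd field is hex_md5("<user>:mongo:<password>"), which is what the 2.x shell's addUser helper computes. A sketch of the equivalent shell calls; the cleartext passwords are not shown in the log, so the placeholders below are mine:

    // Create one read-write and one read-only user on the test db (2.x style).
    var testDB = db.getSiblingDB("test");
    testDB.addUser("bar", "<password for bar>");        // readOnly defaults to false
    testDB.addUser("sad", "<password for sad>", true);  // readOnly = true
    // Stored form, as printed above: pwd == hex_md5(user + ":mongo:" + password)
    printjson(testDB.system.users.find().toArray());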
query try
m31000| Thu Jun 14 01:26:08 [conn] couldn't find database [foo] in config db
m31000| Thu Jun 14 01:26:08 [conn] put [foo] on: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
"error { \"$err\" : \"unauthorized for db:foo level: 1\", \"code\" : 15845 }"
cmd try
"error { \"$err\" : \"unrecognized command: listdbs\", \"code\" : 13390 }"
insert try 1
m31000| Thu Jun 14 01:26:08 [conn] authenticate db: test { authenticate: 1.0, user: "bar", nonce: "3c412c406334c501", key: "f5f4d80aab9bfb2153c856d867a29a2a" }
m31100| Thu Jun 14 01:26:08 [initandlisten] connection accepted from 10.255.119.66:54779 #31 (17 connections now open)
m31100| Thu Jun 14 01:26:08 [conn31] authenticate db: local { authenticate: 1, nonce: "ea48fb9ce437dbd0", user: "__system", key: "bef1937d207158a93179b200d8aa7ccc" }
m31100| Thu Jun 14 01:26:08 [initandlisten] connection accepted from 10.255.119.66:54780 #32 (18 connections now open)
m31100| Thu Jun 14 01:26:08 [conn32] authenticate db: local { authenticate: 1, nonce: "6b9abacdbfb49397", user: "__system", key: "e5cc194b718de0eb0301cdc6a52f6224" }
{ "dbname" : "test", "user" : "bar", "readOnly" : false, "ok" : 1 }
insert try 2
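The "query try" / "cmd try" / "insert try" lines are the test probing what a connection through mongos may do before and after authenticating: the unauthenticated read of foo fails with code 15845 ("unauthorized for db:foo level: 1"), the deliberately bogus command "listdbs" fails with code 13390, and once the connection authenticates as "bar" the insert into test succeeds. A sketch of the same probe sequence; the password placeholder is mine, since the log never shows it:

    // Probe access through mongos before and after authenticating.
    var mongos = new Mongo("domU-12-31-39-01-70-B4:31000");
    var fooDB  = mongos.getDB("foo");
    var testDB = mongos.getDB("test");

    // query try: expected to fail with "unauthorized ... level: 1" (code 15845)
    try { fooDB.bar.findOne(); } catch (e) { print("query try: " + e); }

    // cmd try: an unrecognized command name is rejected (code 13390)
    printjson(fooDB.runCommand({listdbs: 1}));

    // insert try: only succeeds after authenticating as the read-write user
    testDB.auth("bar", "<password for bar>");
    testDB.foo.insert({x: 1});
    print(testDB.getLastError());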
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : "jstests/libs/key1",
    "port" : 31200,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "d2",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 0,
        "set" : "d2"
    }
}
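The document above is the per-node start options that ReplSetTest resolves into the mongod command line printed a few lines below: "dbpath" : "$set-$node" is a template that pathOpts expands to /data/db/d2-0, and the empty-string options (noprealloc, smallfiles, rest) become bare flags. Roughly, the harness is doing something like the sketch below; the constructor option names are my abbreviation of what the log shows and may not match the harness's exact API:

    // Sketch of bringing up the d2 set the way the jstest harness does.
    var d2 = new ReplSetTest({
        name: "d2",
        nodes: 3,
        oplogSize: 40,
        keyFile: "jstests/libs/key1",
        nodeOptions: {noprealloc: "", smallfiles: "", rest: ""}   // illustrative
    });
    d2.startSet();   // starts mongod --replSet d2 --dbpath /data/db/d2-<n> ...
    d2.initiate();   // sends the replSetInitiate document shown further down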
ReplSetTest Starting....
Resetting db path '/data/db/d2-0'
Thu Jun 14 01:26:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31200 --noprealloc --smallfiles --rest --replSet d2 --dbpath /data/db/d2-0
m31200| note: noprealloc may hurt performance in many applications
m31200| Thu Jun 14 01:26:08
m31200| Thu Jun 14 01:26:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31200| Thu Jun 14 01:26:08
m31200| Thu Jun 14 01:26:08 [initandlisten] MongoDB starting : pid=21832 port=31200 dbpath=/data/db/d2-0 32-bit host=domU-12-31-39-01-70-B4
m31200| Thu Jun 14 01:26:08 [initandlisten]
m31200| Thu Jun 14 01:26:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31200| Thu Jun 14 01:26:08 [initandlisten] ** Not recommended for production.
m31200| Thu Jun 14 01:26:08 [initandlisten]
m31200| Thu Jun 14 01:26:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31200| Thu Jun 14 01:26:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31200| Thu Jun 14 01:26:08 [initandlisten] ** with --journal, the limit is lower
m31200| Thu Jun 14 01:26:08 [initandlisten]
m31200| Thu Jun 14 01:26:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31200| Thu Jun 14 01:26:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31200| Thu Jun 14 01:26:08 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31200| Thu Jun 14 01:26:08 [initandlisten] options: { dbpath: "/data/db/d2-0", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31200, replSet: "d2", rest: true, smallfiles: true }
m31200| Thu Jun 14 01:26:08 [initandlisten] waiting for connections on port 31200
m31200| Thu Jun 14 01:26:08 [websvr] admin web console waiting for connections on port 32200
m31200| Thu Jun 14 01:26:08 [initandlisten] connection accepted from 10.255.119.66:34894 #1 (1 connection now open)
m31200| Thu Jun 14 01:26:08 [conn1] authenticate db: local { authenticate: 1, nonce: "1c967760a2705d2d", user: "__system", key: "febacd8fb6ead84fc337788bde19b56e" }
m31200| Thu Jun 14 01:26:08 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Thu Jun 14 01:26:08 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Thu Jun 14 01:26:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m29000| Thu Jun 14 01:26:08 [clientcursormon] mem (MB) res:49 virt:183 mapped:64
[ connection to domU-12-31-39-01-70-B4:31200 ]
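Every "authenticate db: local { authenticate: 1, ..., user: \"__system\", ... }" line in this log is keyfile-based internal authentication: because every mongod and mongos was started with --keyFile jstests/libs/key1, cluster members log into one another as the __system user on the local database. A sketch of doing the same by hand, assuming the 2.x convention that the internal password is the keyfile contents with whitespace stripped (cat() is the shell's file helper):

    // Authenticate as the internal __system user the way cluster members do.
    var key  = cat("jstests/libs/key1").replace(/\s/g, "");
    var conn = new Mongo("domU-12-31-39-01-70-B4:31200");
    var ok   = conn.getDB("local").auth("__system", key);
    print("internal auth " + (ok ? "succeeded" : "failed"));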
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : "jstests/libs/key1",
    "port" : 31201,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "d2",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 1,
        "set" : "d2"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/d2-1'
m31200| Thu Jun 14 01:26:08 [initandlisten] connection accepted from 127.0.0.1:49449 #2 (2 connections now open)
m31200| Thu Jun 14 01:26:08 [conn2] note: no users configured in admin.system.users, allowing localhost access
Thu Jun 14 01:26:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31201 --noprealloc --smallfiles --rest --replSet d2 --dbpath /data/db/d2-1
m31201| note: noprealloc may hurt performance in many applications
m31201| Thu Jun 14 01:26:08
m31201| Thu Jun 14 01:26:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31201| Thu Jun 14 01:26:08
m31201| Thu Jun 14 01:26:08 [initandlisten] MongoDB starting : pid=21848 port=31201 dbpath=/data/db/d2-1 32-bit host=domU-12-31-39-01-70-B4
m31201| Thu Jun 14 01:26:08 [initandlisten]
m31201| Thu Jun 14 01:26:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31201| Thu Jun 14 01:26:08 [initandlisten] ** Not recommended for production.
m31201| Thu Jun 14 01:26:08 [initandlisten]
m31201| Thu Jun 14 01:26:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31201| Thu Jun 14 01:26:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31201| Thu Jun 14 01:26:08 [initandlisten] ** with --journal, the limit is lower
m31201| Thu Jun 14 01:26:08 [initandlisten]
m31201| Thu Jun 14 01:26:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31201| Thu Jun 14 01:26:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31201| Thu Jun 14 01:26:08 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31201| Thu Jun 14 01:26:08 [initandlisten] options: { dbpath: "/data/db/d2-1", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "d2", rest: true, smallfiles: true }
m31201| Thu Jun 14 01:26:08 [initandlisten] waiting for connections on port 31201
m31201| Thu Jun 14 01:26:08 [websvr] admin web console waiting for connections on port 32201
m31201| Thu Jun 14 01:26:08 [initandlisten] connection accepted from 10.255.119.66:47283 #1 (1 connection now open)
m31201| Thu Jun 14 01:26:08 [conn1] authenticate db: local { authenticate: 1, nonce: "e9cf99926d1a34e3", user: "__system", key: "89a17c82761d95669fb2a6f51009110f" }
m31201| Thu Jun 14 01:26:08 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Thu Jun 14 01:26:08 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
    connection to domU-12-31-39-01-70-B4:31200,
    connection to domU-12-31-39-01-70-B4:31201
]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : "jstests/libs/key1",
    "port" : 31202,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "d2",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 2,
        "set" : "d2"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/d2-2'
m31201| Thu Jun 14 01:26:09 [initandlisten] connection accepted from 127.0.0.1:49285 #2 (2 connections now open)
m31201| Thu Jun 14 01:26:09 [conn2] note: no users configured in admin.system.users, allowing localhost access
Thu Jun 14 01:26:09 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31202 --noprealloc --smallfiles --rest --replSet d2 --dbpath /data/db/d2-2
m31202| note: noprealloc may hurt performance in many applications
m31202| Thu Jun 14 01:26:09
m31202| Thu Jun 14 01:26:09 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31202| Thu Jun 14 01:26:09
m31202| Thu Jun 14 01:26:09 [initandlisten] MongoDB starting : pid=21864 port=31202 dbpath=/data/db/d2-2 32-bit host=domU-12-31-39-01-70-B4
m31202| Thu Jun 14 01:26:09 [initandlisten]
m31202| Thu Jun 14 01:26:09 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31202| Thu Jun 14 01:26:09 [initandlisten] ** Not recommended for production.
m31202| Thu Jun 14 01:26:09 [initandlisten]
m31202| Thu Jun 14 01:26:09 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31202| Thu Jun 14 01:26:09 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31202| Thu Jun 14 01:26:09 [initandlisten] ** with --journal, the limit is lower
m31202| Thu Jun 14 01:26:09 [initandlisten]
m31202| Thu Jun 14 01:26:09 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31202| Thu Jun 14 01:26:09 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31202| Thu Jun 14 01:26:09 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31202| Thu Jun 14 01:26:09 [initandlisten] options: { dbpath: "/data/db/d2-2", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31202, replSet: "d2", rest: true, smallfiles: true }
m31202| Thu Jun 14 01:26:09 [initandlisten] waiting for connections on port 31202
m31202| Thu Jun 14 01:26:09 [websvr] admin web console waiting for connections on port 32202
m31202| Thu Jun 14 01:26:09 [initandlisten] connection accepted from 10.255.119.66:58033 #1 (1 connection now open)
m31202| Thu Jun 14 01:26:09 [conn1] authenticate db: local { authenticate: 1, nonce: "406a0e0ae78aa70a", user: "__system", key: "d8d85509d8e1461eeaa2155328b2acfc" }
m31202| Thu Jun 14 01:26:09 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31202| Thu Jun 14 01:26:09 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
    connection to domU-12-31-39-01-70-B4:31200,
    connection to domU-12-31-39-01-70-B4:31201,
    connection to domU-12-31-39-01-70-B4:31202
]
{
    "replSetInitiate" : {
        "_id" : "d2",
        "members" : [
            {
                "_id" : 0,
                "host" : "domU-12-31-39-01-70-B4:31200"
            },
            {
                "_id" : 1,
                "host" : "domU-12-31-39-01-70-B4:31201"
            },
            {
                "_id" : 2,
                "host" : "domU-12-31-39-01-70-B4:31202"
            }
        ]
    }
}
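The document above is the replSetInitiate config the harness sends to the first node of d2; it is exactly what rs.initiate() wraps, as the "replSet info you may need to run replSetInitiate -- rs.initiate() in the shell" lines suggest. Done by hand from a shell connected to one of the members, it would be:

    // Initiate the d2 replica set.
    var cfg = {
        _id: "d2",
        members: [
            {_id: 0, host: "domU-12-31-39-01-70-B4:31200"},
            {_id: 1, host: "domU-12-31-39-01-70-B4:31201"},
            {_id: 2, host: "domU-12-31-39-01-70-B4:31202"}
        ]
    };
    // rs.initiate(cfg) is shorthand for:
    db.adminCommand({replSetInitiate: cfg});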
m31202| Thu Jun 14 01:26:09 [initandlisten] connection accepted from 127.0.0.1:38030 #2 (2 connections now open)
m31202| Thu Jun 14 01:26:09 [conn2] note: no users configured in admin.system.users, allowing localhost access
m31200| Thu Jun 14 01:26:09 [conn2] replSet replSetInitiate admin command received from client
m31200| Thu Jun 14 01:26:09 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31201| Thu Jun 14 01:26:09 [initandlisten] connection accepted from 10.255.119.66:47288 #3 (3 connections now open)
m31201| Thu Jun 14 01:26:09 [conn3] authenticate db: local { authenticate: 1, nonce: "6b70e24bb7572efb", user: "__system", key: "77cf1be9b6ef12738e3e38e49819c752" }
m31202| Thu Jun 14 01:26:09 [initandlisten] connection accepted from 10.255.119.66:58036 #3 (3 connections now open)
m31202| Thu Jun 14 01:26:09 [conn3] authenticate db: local { authenticate: 1, nonce: "dd3aec87d6d434a6", user: "__system", key: "4ef63caa82d766987f779f0b9b7456f1" }
m31200| Thu Jun 14 01:26:09 [conn2] replSet replSetInitiate all members seem up
m31200| Thu Jun 14 01:26:09 [conn2] ******
m31200| Thu Jun 14 01:26:09 [conn2] creating replication oplog of size: 40MB...
m31200| Thu Jun 14 01:26:09 [FileAllocator] allocating new datafile /data/db/d2-0/local.ns, filling with zeroes...
m31200| Thu Jun 14 01:26:09 [FileAllocator] creating directory /data/db/d2-0/_tmp
m30999| Thu Jun 14 01:26:09 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975f12fbdcaaf7b2c072f
m30999| Thu Jun 14 01:26:09 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31200| Thu Jun 14 01:26:09 [FileAllocator] done allocating datafile /data/db/d2-0/local.ns, size: 16MB, took 0.255 secs
m31200| Thu Jun 14 01:26:09 [FileAllocator] allocating new datafile /data/db/d2-0/local.0, filling with zeroes...
m31000| Thu Jun 14 01:26:10 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975f244bfbb7b7d568223
m31000| Thu Jun 14 01:26:10 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31200| Thu Jun 14 01:26:11 [FileAllocator] done allocating datafile /data/db/d2-0/local.0, size: 64MB, took 1.801 secs
m31200| Thu Jun 14 01:26:11 [conn2] ******
m31200| Thu Jun 14 01:26:11 [conn2] replSet info saving a newer config version to local.system.replset
m31200| Thu Jun 14 01:26:11 [conn2] replSet saveConfigLocally done
m31200| Thu Jun 14 01:26:11 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Thu Jun 14 01:26:11 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "d2", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31200" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31201" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31202" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:2115427 r:53 w:35 reslen:112 2116ms
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
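replSetInitiate does not return until the 40 MB oplog requested with --oplogSize has been preallocated (the local.ns/local.0 allocations above), which is why the command took ~2116 ms and why the reply warns the set will only "come online in about a minute". Once a member is up, the oplog it actually allocated can be checked from the shell, for example:

    // Inspect the oplog that replSetInitiate just preallocated (run on a member).
    var info = db.getReplicationInfo();   // shell helper over local.oplog.rs
    print("oplog size (MB): " + info.logSizeMB);
    print("used (MB):       " + info.usedMB);
    // or look at the collection directly:
    printjson(db.getSiblingDB("local").oplog.rs.stats());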
m31102| Thu Jun 14 01:26:14 [conn3] end connection 10.255.119.66:45873 (8 connections now open)
m31102| Thu Jun 14 01:26:14 [initandlisten] connection accepted from 10.255.119.66:54733 #10 (9 connections now open)
m31102| Thu Jun 14 01:26:14 [conn10] authenticate db: local { authenticate: 1, nonce: "540c495194ab883e", user: "__system", key: "a514117e025f8fcffcb616b953c04b58" }
m31102| Thu Jun 14 01:26:16 [conn4] end connection 10.255.119.66:45876 (8 connections now open)
m31102| Thu Jun 14 01:26:16 [initandlisten] connection accepted from 10.255.119.66:54734 #11 (9 connections now open)
m31102| Thu Jun 14 01:26:16 [conn11] authenticate db: local { authenticate: 1, nonce: "7d5b3cc141a93680", user: "__system", key: "2ef778730df33617b8a09873ea48561c" }
m31101| Thu Jun 14 01:26:16 [conn4] end connection 10.255.119.66:40402 (8 connections now open)
m31101| Thu Jun 14 01:26:16 [initandlisten] connection accepted from 10.255.119.66:46113 #11 (9 connections now open)
m31101| Thu Jun 14 01:26:16 [conn11] authenticate db: local { authenticate: 1, nonce: "35a96442517d51e8", user: "__system", key: "faa5962197658c485774ae510af7be04" }
m31200| Thu Jun 14 01:26:18 [rsStart] replSet I am domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:18 [rsStart] replSet STARTUP2
m31200| Thu Jun 14 01:26:18 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is up
m31200| Thu Jun 14 01:26:18 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is up
m31200| Thu Jun 14 01:26:18 [rsSync] replSet SECONDARY
m31201| Thu Jun 14 01:26:18 [rsStart] trying to contact domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:18 [initandlisten] connection accepted from 10.255.119.66:34907 #3 (3 connections now open)
m31200| Thu Jun 14 01:26:18 [conn3] authenticate db: local { authenticate: 1, nonce: "a4007b4d28f1fb8f", user: "__system", key: "748d9286e66fb31de43af2562b6dbb5b" }
m31201| Thu Jun 14 01:26:18 [rsStart] replSet I am domU-12-31-39-01-70-B4:31201
m31201| Thu Jun 14 01:26:18 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Thu Jun 14 01:26:18 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Thu Jun 14 01:26:18 [FileAllocator] allocating new datafile /data/db/d2-1/local.ns, filling with zeroes...
m31201| Thu Jun 14 01:26:18 [FileAllocator] creating directory /data/db/d2-1/_tmp
m31202| Thu Jun 14 01:26:19 [rsStart] trying to contact domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:19 [initandlisten] connection accepted from 10.255.119.66:34908 #4 (4 connections now open)
m31200| Thu Jun 14 01:26:19 [conn4] authenticate db: local { authenticate: 1, nonce: "9bffb94ee67ac01d", user: "__system", key: "df84f8537d69077b3ee5507bd254c24b" }
m31202| Thu Jun 14 01:26:19 [rsStart] replSet I am domU-12-31-39-01-70-B4:31202
m31202| Thu Jun 14 01:26:19 [rsStart] replSet got config version 1 from a remote, saving locally
m31202| Thu Jun 14 01:26:19 [rsStart] replSet info saving a newer config version to local.system.replset
m31202| Thu Jun 14 01:26:19 [FileAllocator] allocating new datafile /data/db/d2-2/local.ns, filling with zeroes...
m31202| Thu Jun 14 01:26:19 [FileAllocator] creating directory /data/db/d2-2/_tmp
m31201| Thu Jun 14 01:26:19 [FileAllocator] done allocating datafile /data/db/d2-1/local.ns, size: 16MB, took 0.271 secs
m31201| Thu Jun 14 01:26:19 [FileAllocator] allocating new datafile /data/db/d2-1/local.0, filling with zeroes...
m30999| Thu Jun 14 01:26:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd975fb2fbdcaaf7b2c0730
m30999| Thu Jun 14 01:26:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31201| Thu Jun 14 01:26:19 [FileAllocator] done allocating datafile /data/db/d2-1/local.0, size: 16MB, took 0.692 secs
m31202| Thu Jun 14 01:26:19 [FileAllocator] done allocating datafile /data/db/d2-2/local.ns, size: 16MB, took 0.7 secs
m31202| Thu Jun 14 01:26:19 [FileAllocator] allocating new datafile /data/db/d2-2/local.0, filling with zeroes...
m31201| Thu Jun 14 01:26:20 [rsStart] replSet saveConfigLocally done
m31201| Thu Jun 14 01:26:20 [rsStart] replSet STARTUP2
m31202| Thu Jun 14 01:26:20 [FileAllocator] done allocating datafile /data/db/d2-2/local.0, size: 16MB, took 0.325 secs
m31201| Thu Jun 14 01:26:20 [rsSync] ******
m31201| Thu Jun 14 01:26:20 [rsSync] creating replication oplog of size: 40MB...
m31201| Thu Jun 14 01:26:20 [FileAllocator] allocating new datafile /data/db/d2-1/local.1, filling with zeroes...
m31202| Thu Jun 14 01:26:20 [rsStart] replSet saveConfigLocally done
m31202| Thu Jun 14 01:26:20 [rsStart] replSet STARTUP2
m31202| Thu Jun 14 01:26:20 [rsSync] ******
m31202| Thu Jun 14 01:26:20 [rsSync] creating replication oplog of size: 40MB...
m31202| Thu Jun 14 01:26:20 [FileAllocator] allocating new datafile /data/db/d2-2/local.1, filling with zeroes...
m31200| Thu Jun 14 01:26:20 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state STARTUP2
m31200| Thu Jun 14 01:26:20 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state STARTUP2
m31200| Thu Jun 14 01:26:20 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31202 would veto
m31200| Thu Jun 14 01:26:20 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31202 would veto
m31000| Thu Jun 14 01:26:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd975fc44bfbb7b7d568224
m31000| Thu Jun 14 01:26:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31201| Thu Jun 14 01:26:20 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is up
m31201| Thu Jun 14 01:26:20 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state SECONDARY
m31202| Thu Jun 14 01:26:20 [initandlisten] connection accepted from 10.255.119.66:58042 #4 (4 connections now open)
m31202| Thu Jun 14 01:26:20 [conn4] authenticate db: local { authenticate: 1, nonce: "f3e90ade1372f45e", user: "__system", key: "5b5d71999d6d28f4f85f1ba5d3601da6" }
m31201| Thu Jun 14 01:26:20 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is up
m31201| Thu Jun 14 01:26:20 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state STARTUP2
m31202| Thu Jun 14 01:26:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is up
m31202| Thu Jun 14 01:26:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state SECONDARY
m31201| Thu Jun 14 01:26:21 [initandlisten] connection accepted from 10.255.119.66:47296 #4 (4 connections now open)
m31201| Thu Jun 14 01:26:21 [conn4] authenticate db: local { authenticate: 1, nonce: "c191bbad7d3844c0", user: "__system", key: "89725515b008321338390b603fba8925" }
m31202| Thu Jun 14 01:26:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is up
m31202| Thu Jun 14 01:26:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state STARTUP2
m31202| Thu Jun 14 01:26:22 [FileAllocator] done allocating datafile /data/db/d2-2/local.1, size: 64MB, took 2.235 secs
m31201| Thu Jun 14 01:26:22 [FileAllocator] done allocating datafile /data/db/d2-1/local.1, size: 64MB, took 2.388 secs
m31201| Thu Jun 14 01:26:22 [rsSync] ******
m31201| Thu Jun 14 01:26:22 [rsSync] replSet initial sync pending
m31201| Thu Jun 14 01:26:22 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31202| Thu Jun 14 01:26:22 [rsSync] ******
m31202| Thu Jun 14 01:26:22 [rsSync] replSet initial sync pending
m31202| Thu Jun 14 01:26:22 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31200| Thu Jun 14 01:26:26 [rsMgr] replSet info electSelf 0
m31202| Thu Jun 14 01:26:26 [conn3] replSet RECOVERING
m31202| Thu Jun 14 01:26:26 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31200 (0)
m31201| Thu Jun 14 01:26:26 [conn3] replSet RECOVERING
m31201| Thu Jun 14 01:26:26 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31200 (0)
m31200| Thu Jun 14 01:26:26 [rsMgr] replSet PRIMARY
m31201| Thu Jun 14 01:26:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state RECOVERING
m31201| Thu Jun 14 01:26:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state PRIMARY
m31202| Thu Jun 14 01:26:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state PRIMARY
m31202| Thu Jun 14 01:26:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state RECOVERING
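The lines above are d2's first election: 31200 runs electSelf, 31201 and 31202 vote "yea" while still RECOVERING (their initial sync is waiting for a primary or secondary to sync from), and 31200 transitions to PRIMARY. From the shell, the quickest way to see who won is isMaster; the state codes that appear throughout this log are 1 = PRIMARY, 2 = SECONDARY, 3 = RECOVERING, 5 = STARTUP2:

    // Ask any member who the primary is.
    var conn  = new Mongo("domU-12-31-39-01-70-B4:31201");
    var hello = conn.getDB("admin").runCommand({isMaster: 1});
    print("ismaster: " + hello.ismaster + ", primary: " + hello.primary);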
adding shard d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31000| Thu Jun 14 01:26:28 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "794b75dec1950f46", key: "b1fb961bd65cc8f0a18bbf17dc743109" }
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
logged in
m31000| Thu Jun 14 01:26:28 [conn] starting new replica set monitor for replica set d2 with seed of domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31200| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:34911 #5 (5 connections now open)
m31000| Thu Jun 14 01:26:28 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31200 for replica set d2
m31000| Thu Jun 14 01:26:28 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31200", 1: "domU-12-31-39-01-70-B4:31202", 2: "domU-12-31-39-01-70-B4:31201" } from d2/
m31000| Thu Jun 14 01:26:28 [conn] trying to add new host domU-12-31-39-01-70-B4:31200 to replica set d2
m31200| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:34912 #6 (6 connections now open)
m31000| Thu Jun 14 01:26:28 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31200 in replica set d2
m31000| Thu Jun 14 01:26:28 [conn] trying to add new host domU-12-31-39-01-70-B4:31201 to replica set d2
m31201| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:47299 #5 (5 connections now open)
m31000| Thu Jun 14 01:26:28 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31201 in replica set d2
m31000| Thu Jun 14 01:26:28 [conn] trying to add new host domU-12-31-39-01-70-B4:31202 to replica set d2
m31000| Thu Jun 14 01:26:28 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31202 in replica set d2
m31202| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:58047 #5 (5 connections now open)
m31200| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:34915 #7 (7 connections now open)
m31000| Thu Jun 14 01:26:28 [conn] Primary for replica set d2 changed to domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:28 [conn7] authenticate db: local { authenticate: 1, nonce: "b10036ae212fe1", user: "__system", key: "c86bca0ed8cfcdab5540a603b553b103" }
m31200| Thu Jun 14 01:26:28 [conn5] end connection 10.255.119.66:34911 (6 connections now open)
m31201| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:47302 #6 (6 connections now open)
m31201| Thu Jun 14 01:26:28 [conn6] authenticate db: local { authenticate: 1, nonce: "fad4da9dab676e6", user: "__system", key: "c2b4998cd55a65b390b6e10b34c54cab" }
m31202| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:58050 #6 (6 connections now open)
m31202| Thu Jun 14 01:26:28 [conn6] authenticate db: local { authenticate: 1, nonce: "1fc9028fc57c9611", user: "__system", key: "634bc771b7f2636daddb21001d97d682" }
m31000| Thu Jun 14 01:26:28 [conn] replica set monitor for replica set d2 started, address is d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31200| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:34918 #8 (7 connections now open)
m31200| Thu Jun 14 01:26:28 [conn8] authenticate db: local { authenticate: 1, nonce: "6b009daaa634fcac", user: "__system", key: "0aabb37d6482fd202876a1523b473c9a" }
m31000| Thu Jun 14 01:26:28 [conn] going to add shard: { _id: "d2", host: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202" }
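From "adding shard d2/..." through "going to add shard", the test authenticates to mongos as the cluster admin "foo" and adds the second replica set as a shard; mongos starts a replica set monitor for d2 and then writes the shard document ({ _id: "d2", host: "d2/..." }) to the config servers. By hand, the same step looks like the sketch below (the password placeholder is mine):

    // Add the d2 replica set as a shard through an authenticated mongos.
    var mongos = new Mongo("domU-12-31-39-01-70-B4:31000");
    var admin  = mongos.getDB("admin");
    admin.auth("foo", "<password for foo>");
    printjson(admin.runCommand({
        addShard: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202"
    }));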
m31100| Thu Jun 14 01:26:28 [conn8] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:28 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m29000| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:51894 #13 (13 connections now open)
m29000| Thu Jun 14 01:26:28 [conn13] authenticate db: local { authenticate: 1, nonce: "d80c57cefaeb4145", user: "__system", key: "1ffb6ea7542450f22c8d903b274960bb" }
m31100| Thu Jun 14 01:26:28 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 0.0 } ], shardId: "test.foo-x_MinKey", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:28 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:28 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd976040bf7ca455ce4e479
m31100| Thu Jun 14 01:26:28 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:31100:1339651588:969901886 (sleeping for 30000ms)
m29000| Thu Jun 14 01:26:28 [initandlisten] connection accepted from 10.255.119.66:51895 #14 (14 connections now open)
m31100| Thu Jun 14 01:26:28 [conn8] splitChunk accepted at version 1|0||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:28 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:28-0", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651588737), what: "split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m29000| Thu Jun 14 01:26:28 [conn14] authenticate db: local { authenticate: 1, nonce: "90487dc61874418c", user: "__system", key: "cb7c2fa47e909479e700bcb2f28419ad" }
m31100| Thu Jun 14 01:26:28 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31000| Thu Jun 14 01:26:28 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd975e444bfbb7b7d568221 based on: 1|0||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:28 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } on: { x: 0.0 } (splitThreshold 921)
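The splitChunk traffic above is mongos-driven autosplitting: once enough data lands in a chunk, mongos asks the owning shard's primary (31100) for split points, the shard takes the test.foo distributed lock on the config server, splits { MinKey -> MaxKey } at { x: 0 }, logs a "split" changelog event, and mongos reloads its ChunkManager at version 1|2. The same split can be requested manually through mongos:

    // Manually split test.foo at x: 0 (what the autosplit did above).
    var admin = new Mongo("domU-12-31-39-01-70-B4:31000").getDB("admin");
    // admin.auth("foo", "<password>");   // needed here, since the cluster runs with a keyFile
    printjson(admin.runCommand({split: "test.foo", middle: {x: 0}}));
    // The resulting chunks are recorded in the config database:
    printjson(new Mongo("domU-12-31-39-01-70-B4:29000").getDB("config")
                  .chunks.find({ns: "test.foo"}).toArray());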
m31200| Thu Jun 14 01:26:28 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state RECOVERING
m31200| Thu Jun 14 01:26:28 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state RECOVERING
m30999| Thu Jun 14 01:26:29 [Balancer] starting new replica set monitor for replica set d2 with seed of domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m30999| Thu Jun 14 01:26:29 [Balancer] successfully connected to seed domU-12-31-39-01-70-B4:31200 for replica set d2
m31200| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:34921 #9 (8 connections now open)
m30999| Thu Jun 14 01:26:29 [Balancer] changing hosts to { 0: "domU-12-31-39-01-70-B4:31200", 1: "domU-12-31-39-01-70-B4:31202", 2: "domU-12-31-39-01-70-B4:31201" } from d2/
m30999| Thu Jun 14 01:26:29 [Balancer] trying to add new host domU-12-31-39-01-70-B4:31200 to replica set d2
m30999| Thu Jun 14 01:26:29 [Balancer] successfully connected to new host domU-12-31-39-01-70-B4:31200 in replica set d2
m30999| Thu Jun 14 01:26:29 [Balancer] trying to add new host domU-12-31-39-01-70-B4:31201 to replica set d2
m31200| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:34922 #10 (9 connections now open)
m31201| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:47309 #7 (7 connections now open)
m30999| Thu Jun 14 01:26:29 [Balancer] successfully connected to new host domU-12-31-39-01-70-B4:31201 in replica set d2
m30999| Thu Jun 14 01:26:29 [Balancer] trying to add new host domU-12-31-39-01-70-B4:31202 to replica set d2
m30999| Thu Jun 14 01:26:29 [Balancer] successfully connected to new host domU-12-31-39-01-70-B4:31202 in replica set d2
m31202| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:58057 #7 (7 connections now open)
m31200| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:34925 #11 (10 connections now open)
m31200| Thu Jun 14 01:26:29 [conn11] authenticate db: local { authenticate: 1, nonce: "20e8a00f6e59d8fb", user: "__system", key: "da28d3a4ca0549bedc4e2de6edcd7f18" }
m31200| Thu Jun 14 01:26:29 [conn9] end connection 10.255.119.66:34921 (9 connections now open)
m30999| Thu Jun 14 01:26:29 [Balancer] Primary for replica set d2 changed to domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:47312 #8 (8 connections now open)
m31201| Thu Jun 14 01:26:29 [conn8] authenticate db: local { authenticate: 1, nonce: "ec34add1a338709b", user: "__system", key: "6262e9f66b58868e5db13b94a24164fa" }
m31202| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:58060 #8 (8 connections now open)
m31202| Thu Jun 14 01:26:29 [conn8] authenticate db: local { authenticate: 1, nonce: "b9ad856142ac800b", user: "__system", key: "534cdd2fc1eb2be324695011a6c3a162" }
m30999| Thu Jun 14 01:26:29 [Balancer] replica set monitor for replica set d2 started, address is d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31200| Thu Jun 14 01:26:29 [initandlisten] connection accepted from 10.255.119.66:34928 #12 (10 connections now open)
m31200| Thu Jun 14 01:26:29 [conn12] authenticate db: local { authenticate: 1, nonce: "59943bcd51a7f4de", user: "__system", key: "ba1d1854cf396afac03cdde6c344fdcd" }
m30999| Thu Jun 14 01:26:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd976052fbdcaaf7b2c0731
m30999| Thu Jun 14 01:26:29 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:26:29 [Balancer] d1 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:26:29 [Balancer] d2 maxSize: 0 currSize: 80 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:26:29 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:26:29 [Balancer] d1
m30999| Thu Jun 14 01:26:29 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Thu Jun 14 01:26:29 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 0.0 }, max: { x: MaxKey }, shard: "d1" }
m30999| Thu Jun 14 01:26:29 [Balancer] d2
m30999| Thu Jun 14 01:26:29 [Balancer] ----
m30999| Thu Jun 14 01:26:29 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Thu Jun 14 01:26:29 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|2||4fd975e444bfbb7b7d568221 based on: (empty)
m30999| Thu Jun 14 01:26:29 [Balancer] Assertion: 10320:BSONElement: bad type -40
m30999| 0x84f514a 0x8126495 0x83f3537 0x811ddd3 0x835a42a 0x82c3073 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0x2bf542 0x984b6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo11BSONElement4sizeEv+0x1b3) [0x811ddd3]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo12ChunkManager9findChunkERKNS_7BSONObjE+0x18a) [0x835a42a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x613) [0x82c3073]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c) [0x82c4b6c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0x2bf542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x984b6e]
m30999| Thu Jun 14 01:26:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m30999| Thu Jun 14 01:26:29 [Balancer] scoped connection to domU-12-31-39-01-70-B4:29000 not being returned to the pool
m30999| Thu Jun 14 01:26:29 [Balancer] caught exception while doing balance: BSONElement: bad type -40
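Here the m30999 balancer hits assertion 10320 ("BSONElement: bad type -40") while preparing to move the test.foo MinKey chunk from d1 to d2. Demangled, the backtrace shows the assertion firing in mongo::BSONElement::size(), called from mongo::ChunkManager::findChunk(mongo::BSONObj const&) inside mongo::Balancer::_moveChunks() / Balancer::run(): the balancer tripped over a malformed BSON element while looking up the chunk, caught the exception, released the balancer lock, and skipped that round (the other mongos, m31000, continues balancing below). When a round fails like this, balancer state and recent activity can be inspected from the config database:

    // Inspect balancer state and recent migrations from any mongos.
    var conf = new Mongo("domU-12-31-39-01-70-B4:31000").getDB("config");
    printjson(conf.settings.findOne({_id: "balancer"}));   // absent or stopped:false => enabled
    printjson(conf.locks.findOne({_id: "balancer"}));      // who holds the balancer lock
    printjson(conf.changelog.find({what: /moveChunk/})
                  .sort({time: -1}).limit(5).toArray());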
m29000| Thu Jun 14 01:26:29 [conn6] end connection 10.255.119.66:36519 (13 connections now open)
m31100| Thu Jun 14 01:26:30 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 9 locks(micros) r:18427 nreturned:2177 reslen:37029 112ms
m31100| Thu Jun 14 01:26:30 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 15 locks(micros) r:26784 nreturned:2575 reslen:43795 169ms
m31100| Thu Jun 14 01:26:30 [conn8] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:30 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:30 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 0.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 5850.0 } ], shardId: "test.foo-x_0.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:30 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:30 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd976060bf7ca455ce4e47a
m31100| Thu Jun 14 01:26:30 [conn8] splitChunk accepted at version 1|2||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:30 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:30-1", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651590165), what: "split", ns: "test.foo", details: { before: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 0.0 }, max: { x: 5850.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 5850.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31100| Thu Jun 14 01:26:30 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31000| Thu Jun 14 01:26:30 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd975e444bfbb7b7d568221 based on: 1|2||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:30 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } on: { x: 5850.0 } (splitThreshold 471859) (migrate suggested)
m31000| Thu Jun 14 01:26:30 [conn] moving chunk (auto): ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|4||000000000000000000000000 min: { x: 5850.0 } max: { x: MaxKey } to: d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31000| Thu Jun 14 01:26:30 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|4||000000000000000000000000 min: { x: 5850.0 } max: { x: MaxKey }) d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 -> d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31100| Thu Jun 14 01:26:30 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", to: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", fromShard: "d1", toShard: "d2", min: { x: 5850.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_5850.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:30 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:30 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd976060bf7ca455ce4e47b
m31100| Thu Jun 14 01:26:30 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:30-2", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651590169), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, from: "d1", to: "d2" } }
m31100| Thu Jun 14 01:26:30 [conn8] moveChunk request accepted at version 1|4||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:30 [conn8] moveChunk number of documents: 1
m31100| Thu Jun 14 01:26:30 [conn8] starting new replica set monitor for replica set d2 with seed of domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31100| Thu Jun 14 01:26:30 [conn8] successfully connected to seed domU-12-31-39-01-70-B4:31200 for replica set d2
m31200| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:34929 #13 (11 connections now open)
m31100| Thu Jun 14 01:26:30 [conn8] changing hosts to { 0: "domU-12-31-39-01-70-B4:31200", 1: "domU-12-31-39-01-70-B4:31202", 2: "domU-12-31-39-01-70-B4:31201" } from d2/
m31100| Thu Jun 14 01:26:30 [conn8] trying to add new host domU-12-31-39-01-70-B4:31200 to replica set d2
m31100| Thu Jun 14 01:26:30 [conn8] successfully connected to new host domU-12-31-39-01-70-B4:31200 in replica set d2
m31100| Thu Jun 14 01:26:30 [conn8] trying to add new host domU-12-31-39-01-70-B4:31201 to replica set d2
m31200| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:34930 #14 (12 connections now open)
m31201| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:47317 #9 (9 connections now open)
m31100| Thu Jun 14 01:26:30 [conn8] successfully connected to new host domU-12-31-39-01-70-B4:31201 in replica set d2
m31100| Thu Jun 14 01:26:30 [conn8] trying to add new host domU-12-31-39-01-70-B4:31202 to replica set d2
m31100| Thu Jun 14 01:26:30 [conn8] successfully connected to new host domU-12-31-39-01-70-B4:31202 in replica set d2
m31202| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:58065 #9 (9 connections now open)
m31200| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:34933 #15 (13 connections now open)
m31200| Thu Jun 14 01:26:30 [conn15] authenticate db: local { authenticate: 1, nonce: "32ffd4d15c7f9527", user: "__system", key: "51eeabbaad9668d199563ed394a971d4" }
m31200| Thu Jun 14 01:26:30 [conn13] end connection 10.255.119.66:34929 (12 connections now open)
m31100| Thu Jun 14 01:26:30 [conn8] Primary for replica set d2 changed to domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:47320 #10 (10 connections now open)
m31201| Thu Jun 14 01:26:30 [conn10] authenticate db: local { authenticate: 1, nonce: "d3976fb202b5e6ab", user: "__system", key: "583ffb02a9e4111af849c25f1021a31e" }
m31202| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:58068 #10 (10 connections now open)
m31202| Thu Jun 14 01:26:30 [conn10] authenticate db: local { authenticate: 1, nonce: "fe3f3f3f986633c2", user: "__system", key: "5a12330ff5bfe639a54e9182533aa400" }
m31100| Thu Jun 14 01:26:30 [conn8] replica set monitor for replica set d2 started, address is d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31100| Thu Jun 14 01:26:30 [ReplicaSetMonitorWatcher] starting
m31200| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:34936 #16 (13 connections now open)
m31200| Thu Jun 14 01:26:30 [conn16] authenticate db: local { authenticate: 1, nonce: "f989adfbc2fce01f", user: "__system", key: "03c10b7d8774f9e6f90090a8c0d9862b" }
m31200| Thu Jun 14 01:26:30 [migrateThread] starting new replica set monitor for replica set d1 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31200| Thu Jun 14 01:26:30 [migrateThread] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set d1
m31100| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:54825 #33 (19 connections now open)
m31200| Thu Jun 14 01:26:30 [migrateThread] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from d1/
m31200| Thu Jun 14 01:26:30 [migrateThread] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set d1
m31200| Thu Jun 14 01:26:30 [migrateThread] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set d1
m31200| Thu Jun 14 01:26:30 [migrateThread] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set d1
m31100| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:54826 #34 (20 connections now open)
m31200| Thu Jun 14 01:26:30 [migrateThread] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set d1
m31200| Thu Jun 14 01:26:30 [migrateThread] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set d1
m31101| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:46146 #12 (10 connections now open)
m31102| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:54769 #12 (10 connections now open)
m31200| Thu Jun 14 01:26:30 [migrateThread] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set d1
m31100| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:54829 #35 (21 connections now open)
m31100| Thu Jun 14 01:26:30 [conn35] authenticate db: local { authenticate: 1, nonce: "33671668cc590787", user: "__system", key: "e72de532402b94ea2e8d1ad3c5209ce1" }
m31100| Thu Jun 14 01:26:30 [conn33] end connection 10.255.119.66:54825 (20 connections now open)
m31200| Thu Jun 14 01:26:30 [migrateThread] Primary for replica set d1 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:46149 #13 (11 connections now open)
m31101| Thu Jun 14 01:26:30 [conn13] authenticate db: local { authenticate: 1, nonce: "257d0d566d77b4c5", user: "__system", key: "dfd97e465c5296f5a1a7b6e69e1b2d63" }
m31102| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:54772 #13 (11 connections now open)
m31102| Thu Jun 14 01:26:30 [conn13] authenticate db: local { authenticate: 1, nonce: "7e95581240918808", user: "__system", key: "a2c813bd2a7fe3e37a3ed274968efe3d" }
m31200| Thu Jun 14 01:26:30 [migrateThread] replica set monitor for replica set d1 started, address is d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31200| Thu Jun 14 01:26:30 [ReplicaSetMonitorWatcher] starting
m31100| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:54832 #36 (21 connections now open)
m31100| Thu Jun 14 01:26:30 [conn36] authenticate db: local { authenticate: 1, nonce: "abb428f2aa993260", user: "__system", key: "5e84ad7ec0cb14a41d56f033ecc52f1d" }
m31200| Thu Jun 14 01:26:30 [FileAllocator] allocating new datafile /data/db/d2-0/test.ns, filling with zeroes...
m31101| Thu Jun 14 01:26:30 [conn9] end connection 10.255.119.66:46078 (10 connections now open)
m31101| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:46152 #14 (11 connections now open)
m31101| Thu Jun 14 01:26:30 [conn14] authenticate db: local { authenticate: 1, nonce: "829cc53a9785bab", user: "__system", key: "a8ddb99677b626f6c9c198ec19a1ea23" }
m31200| Thu Jun 14 01:26:30 [FileAllocator] done allocating datafile /data/db/d2-0/test.ns, size: 16MB, took 0.238 secs
m31200| Thu Jun 14 01:26:30 [FileAllocator] allocating new datafile /data/db/d2-0/test.0, filling with zeroes...
m31200| Thu Jun 14 01:26:30 [FileAllocator] done allocating datafile /data/db/d2-0/test.0, size: 16MB, took 0.261 secs
m31200| Thu Jun 14 01:26:30 [migrateThread] build index test.foo { _id: 1 }
m31200| Thu Jun 14 01:26:30 [migrateThread] build index done. scanned 0 total records. 0 secs
m31200| Thu Jun 14 01:26:30 [migrateThread] info: creating collection test.foo on add index
m31200| Thu Jun 14 01:26:30 [migrateThread] build index test.foo { x: 1.0 }
m31200| Thu Jun 14 01:26:30 [migrateThread] build index done. scanned 0 total records. 0 secs
m31200| Thu Jun 14 01:26:30 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 5850.0 } -> { x: MaxKey }
m31100| Thu Jun 14 01:26:30 [initandlisten] connection accepted from 10.255.119.66:54834 #37 (22 connections now open)
m31100| Thu Jun 14 01:26:30 [conn37] authenticate db: local { authenticate: 1, nonce: "f6e602c868a0e19c", user: "__system", key: "23ea3d0031fec84ee074b3b64b7b8e8d" }
m31000| Thu Jun 14 01:26:30 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd9760644bfbb7b7d568225
m31000| Thu Jun 14 01:26:30 [Balancer] ---- ShardInfoMap
m31000| Thu Jun 14 01:26:30 [Balancer] d1 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m31000| Thu Jun 14 01:26:30 [Balancer] d2 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m31000| Thu Jun 14 01:26:30 [Balancer] ---- ShardToChunksMap
m31000| Thu Jun 14 01:26:30 [Balancer] d1
m31000| Thu Jun 14 01:26:30 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Thu Jun 14 01:26:30 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m31000| Thu Jun 14 01:26:30 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 5850.0 }, max: { x: MaxKey }, shard: "d1" }
m31000| Thu Jun 14 01:26:30 [Balancer] d2
m31000| Thu Jun 14 01:26:30 [Balancer] ----
m31000| Thu Jun 14 01:26:30 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Thu Jun 14 01:26:30 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|1||000000000000000000000000 min: { x: MinKey } max: { x: 0.0 }) d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 -> d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31100| Thu Jun 14 01:26:30 [conn37] received moveChunk request: { moveChunk: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", to: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_MinKey", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:30 [conn37] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:30-3", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54834", time: new Date(1339651590788), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, step1 of 6: 0, note: "aborted" } }
m31000| Thu Jun 14 01:26:30 [Balancer] moveChunk result: { errmsg: "migration already in progress", ok: 0.0 }
m31000| Thu Jun 14 01:26:30 [Balancer] balancer move failed: { errmsg: "migration already in progress", ok: 0.0 } from: d1 to: d2 chunk: min: { x: 5850.0 } max: { x: 5850.0 }
m31000| Thu Jun 14 01:26:30 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31100| Thu Jun 14 01:26:31 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: 5850.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Thu Jun 14 01:26:31 [conn8] moveChunk setting version to: 2|0||4fd975e444bfbb7b7d568221
m31200| Thu Jun 14 01:26:31 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 5850.0 } -> { x: MaxKey }
m31200| Thu Jun 14 01:26:31 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:31-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651591192), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, step1 of 5: 519, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 490 } }
m29000| Thu Jun 14 01:26:31 [initandlisten] connection accepted from 10.255.119.66:51922 #15 (14 connections now open)
m29000| Thu Jun 14 01:26:31 [conn15] authenticate db: local { authenticate: 1, nonce: "c9da1c2876a80ec0", user: "__system", key: "51256e14ef8fb2db0ec1126c053fb5c1" }
m31100| Thu Jun 14 01:26:31 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: 5850.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Thu Jun 14 01:26:31 [conn8] moveChunk updating self version to: 2|1||4fd975e444bfbb7b7d568221 through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m31100| Thu Jun 14 01:26:31 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:31-4", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651591196), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, from: "d1", to: "d2" } }
m31100| Thu Jun 14 01:26:31 [conn8] doing delete inline
m31100| Thu Jun 14 01:26:31 [conn8] moveChunk deleted: 1
m31100| Thu Jun 14 01:26:32 [conn14] end connection 10.255.119.66:54760 (21 connections now open)
m31100| Thu Jun 14 01:26:32 [initandlisten] connection accepted from 10.255.119.66:54836 #38 (22 connections now open)
m31100| Thu Jun 14 01:26:32 [conn38] authenticate db: local { authenticate: 1, nonce: "f19c51915d196b89", user: "__system", key: "4973a6173ff76c152253df89ab1dd92e" }
m31201| Thu Jun 14 01:26:32 [conn3] end connection 10.255.119.66:47288 (9 connections now open)
m31201| Thu Jun 14 01:26:32 [initandlisten] connection accepted from 10.255.119.66:47335 #11 (10 connections now open)
m31201| Thu Jun 14 01:26:32 [conn11] authenticate db: local { authenticate: 1, nonce: "801a750ad802df29", user: "__system", key: "674d755f2de0b51e367be27ac83178b2" }
m31100| Thu Jun 14 01:26:32 [conn15] end connection 10.255.119.66:54761 (21 connections now open)
m31100| Thu Jun 14 01:26:32 [initandlisten] connection accepted from 10.255.119.66:54838 #39 (22 connections now open)
m31100| Thu Jun 14 01:26:32 [conn39] authenticate db: local { authenticate: 1, nonce: "69857d31b155b68c", user: "__system", key: "f365cf20692ce70ea93d2c6b1f6d4f5e" }
m31100| Thu Jun 14 01:26:33 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31100| Thu Jun 14 01:26:33 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:33-5", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651593200), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 10, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 2003 } }
m31100| Thu Jun 14 01:26:33 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", to: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", fromShard: "d1", toShard: "d2", min: { x: 5850.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_5850.0", configdb: "domU-12-31-39-01-70-B4:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 W:73 r:9314 w:514064 reslen:37 3032ms
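The migration that just finished is the full donor-side moveChunk protocol, and the six timed steps in the moveChunk.from changelog entry above trace it: the donor (31100) accepts the request and takes the collection's distributed lock; the recipient's migrateThread (31200) builds test.foo and its indexes and clones the single document; the two sides go through catchup and steady state; the donor bumps the shard version to 2|0; the recipient commits (moveChunk.to); and the donor deletes the moved range inline and releases the lock, about 3 s in total. It also explains the earlier "migration already in progress" refusal: a shard runs only one migration at a time, so the m31000 balancer's competing move was rejected and retried later. The same move can be issued explicitly through mongos:

    // Explicitly move the chunk containing x: -1 (the MinKey chunk) from d1 to d2.
    var admin = new Mongo("domU-12-31-39-01-70-B4:31000").getDB("admin");
    // admin.auth("foo", "<password>");   // cluster uses --keyFile, so authenticate first
    printjson(admin.runCommand({moveChunk: "test.foo", find: {x: -1}, to: "d2"}));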
m31000| Thu Jun 14 01:26:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 2|1||4fd975e444bfbb7b7d568221 based on: 1|4||4fd975e444bfbb7b7d568221
m31200| Thu Jun 14 01:26:33 [initandlisten] connection accepted from 10.255.119.66:34951 #17 (14 connections now open)
m31200| Thu Jun 14 01:26:33 [conn17] authenticate db: local { authenticate: 1, nonce: "b384042ecfa358b6", user: "__system", key: "6665e3825cc4bbb75c1294c163d503ad" }
m31000| Thu Jun 14 01:26:33 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31200 serverID: 4fd975b644bfbb7b7d56821b
m31000| Thu Jun 14 01:26:33 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31201 serverID: 4fd975b644bfbb7b7d56821b
m31000| Thu Jun 14 01:26:33 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31202 serverID: 4fd975b644bfbb7b7d56821b
m31200| Thu Jun 14 01:26:33 [conn17] no current chunk manager found for this shard, will initialize
m31200| Thu Jun 14 01:26:34 [conn3] end connection 10.255.119.66:34907 (13 connections now open)
m31200| Thu Jun 14 01:26:34 [initandlisten] connection accepted from 10.255.119.66:34952 #18 (14 connections now open)
m31200| Thu Jun 14 01:26:34 [conn18] authenticate db: local { authenticate: 1, nonce: "f31136f656b262ff", user: "__system", key: "bd88ecd3b1612d14d4c646bd23f26664" }
m31200| Thu Jun 14 01:26:34 [conn8] request split points lookup for chunk test.foo { : 5850.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:34 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 5850.0 } -->> { : MaxKey }
m29000| Thu Jun 14 01:26:34 [initandlisten] connection accepted from 10.255.119.66:51928 #16 (15 connections now open)
m29000| Thu Jun 14 01:26:34 [conn16] authenticate db: local { authenticate: 1, nonce: "252c8799ca514b0d", user: "__system", key: "86ca0d3c353daf4d5c7ac9e1f278cf63" }
m31200| Thu Jun 14 01:26:34 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 5850.0 }, max: { x: MaxKey }, from: "d2", splitKeys: [ { x: 17642.0 } ], shardId: "test.foo-x_5850.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31200| Thu Jun 14 01:26:34 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Thu Jun 14 01:26:34 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:31200:1339651594:292043064 (sleeping for 30000ms)
m31200| Thu Jun 14 01:26:34 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' acquired, ts : 4fd9760aa1dd57646fff0e6f
m31200| Thu Jun 14 01:26:34 [conn8] splitChunk accepted at version 2|0||4fd975e444bfbb7b7d568221
m31200| Thu Jun 14 01:26:34 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:34-1", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:34918", time: new Date(1339651594966), what: "split", ns: "test.foo", details: { before: { min: { x: 5850.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 5850.0 }, max: { x: 17642.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 17642.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31200| Thu Jun 14 01:26:34 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' unlocked.
m31000| Thu Jun 14 01:26:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 2|3||4fd975e444bfbb7b7d568221 based on: 2|1||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:34 [conn] autosplitted test.foo shard: ns:test.foo at: d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 lastmod: 2|0||000000000000000000000000 min: { x: 5850.0 } max: { x: MaxKey } on: { x: 17642.0 } (splitThreshold 943718) (migrate suggested)
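mongos performs these splits automatically once a chunk passes the split threshold (943718 bytes here), but the same operation is exposed as a public admin command; a sketch run against mongos:

    db.adminCommand({ split: "test.foo", middle: { x: 17642 } });  // split at an exact shard-key value
    // or let the server choose a median split point inside the chunk containing the key:
    db.adminCommand({ split: "test.foo", find: { x: 17642 } });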
m31200| Thu Jun 14 01:26:35 [conn8] request split points lookup for chunk test.foo { : 17642.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:35 [conn8] request split points lookup for chunk test.foo { : 17642.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:35 [conn4] end connection 10.255.119.66:34908 (13 connections now open)
m31200| Thu Jun 14 01:26:35 [initandlisten] connection accepted from 10.255.119.66:34954 #19 (14 connections now open)
m31200| Thu Jun 14 01:26:35 [conn8] request split points lookup for chunk test.foo { : 17642.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:35 [conn19] authenticate db: local { authenticate: 1, nonce: "aee1f642c1e216a7", user: "__system", key: "013398c1355fcb9f45ffff73bba311e0" }
m31201| Thu Jun 14 01:26:36 [initandlisten] connection accepted from 10.255.119.66:47342 #12 (11 connections now open)
m31201| Thu Jun 14 01:26:36 [conn12] authenticate db: local { authenticate: 1, nonce: "6f7cfc0bba1acc16", user: "__system", key: "44595dcfe7c4559b600e96acaabe4eee" }
m31202| Thu Jun 14 01:26:36 [initandlisten] connection accepted from 10.255.119.66:58090 #11 (11 connections now open)
m31202| Thu Jun 14 01:26:36 [conn11] authenticate db: local { authenticate: 1, nonce: "b33d7b641559213b", user: "__system", key: "b58bd6d1da2acb09ad214e6c99c53bdb" }
m31200| Thu Jun 14 01:26:36 [initandlisten] connection accepted from 10.255.119.66:34955 #20 (15 connections now open)
m31200| Thu Jun 14 01:26:36 [conn20] authenticate db: local { authenticate: 1, nonce: "50325fe9b6f4b70e", user: "__system", key: "92f70918961eb98d5565c2a8cb14ead9" }
m31200| Thu Jun 14 01:26:36 [conn8] request split points lookup for chunk test.foo { : 17642.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:36 [conn8] request split points lookup for chunk test.foo { : 17642.0 } -->> { : MaxKey }
m31101| Thu Jun 14 01:26:36 [clientcursormon] mem (MB) res:51 virt:327 mapped:128
m31200| Thu Jun 14 01:26:36 [conn8] request split points lookup for chunk test.foo { : 17642.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:36 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 17642.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:36 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 17642.0 }, max: { x: MaxKey }, from: "d2", splitKeys: [ { x: 28772.0 } ], shardId: "test.foo-x_17642.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31200| Thu Jun 14 01:26:36 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Thu Jun 14 01:26:36 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' acquired, ts : 4fd9760ca1dd57646fff0e70
m31200| Thu Jun 14 01:26:36 [conn8] splitChunk accepted at version 2|3||4fd975e444bfbb7b7d568221
m31200| Thu Jun 14 01:26:36 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:36-2", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:34918", time: new Date(1339651596701), what: "split", ns: "test.foo", details: { before: { min: { x: 17642.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 17642.0 }, max: { x: 28772.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 28772.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31200| Thu Jun 14 01:26:36 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' unlocked.
m31000| Thu Jun 14 01:26:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 2|5||4fd975e444bfbb7b7d568221 based on: 2|3||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:36 [conn] autosplitted test.foo shard: ns:test.foo at: d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 lastmod: 2|3||000000000000000000000000 min: { x: 17642.0 } max: { x: MaxKey } on: { x: 28772.0 } (splitThreshold 943718) (migrate suggested)
m31200| Thu Jun 14 01:26:36 [conn8] request split points lookup for chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:36 [clientcursormon] mem (MB) res:52 virt:354 mapped:112
m31102| Thu Jun 14 01:26:36 [clientcursormon] mem (MB) res:51 virt:327 mapped:128
m31200| Thu Jun 14 01:26:36 [conn8] request split points lookup for chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:37 [FileAllocator] allocating new datafile /data/db/d2-0/test.1, filling with zeroes...
m31200| Thu Jun 14 01:26:37 [FileAllocator] done allocating datafile /data/db/d2-0/test.1, size: 32MB, took 0.616 secs
m31200| Thu Jun 14 01:26:37 [conn17] insert test.foo keyUpdates:0 locks(micros) W:456 r:350 w:2106672 617ms
m31200| Thu Jun 14 01:26:37 [conn8] request split points lookup for chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:38 [conn8] request split points lookup for chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:38 [conn8] request split points lookup for chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31201| Thu Jun 14 01:26:38 [rsSync] replSet initial sync pending
m31201| Thu Jun 14 01:26:38 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:38 [initandlisten] connection accepted from 10.255.119.66:34958 #21 (16 connections now open)
m31200| Thu Jun 14 01:26:38 [conn21] authenticate db: local { authenticate: 1, nonce: "ef8a25011f51a951", user: "__system", key: "67031d0f306df27e61eb6fd88c9b61ac" }
m31201| Thu Jun 14 01:26:38 [rsSync] build index local.me { _id: 1 }
m31201| Thu Jun 14 01:26:38 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:26:38 [rsSync] replSet initial sync drop all databases
m31201| Thu Jun 14 01:26:38 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Thu Jun 14 01:26:38 [rsSync] replSet initial sync clone all databases
m31201| Thu Jun 14 01:26:38 [rsSync] replSet initial sync cloning db: test
m31200| Thu Jun 14 01:26:38 [initandlisten] connection accepted from 10.255.119.66:34959 #22 (17 connections now open)
m31200| Thu Jun 14 01:26:38 [conn22] authenticate db: local { authenticate: 1, nonce: "c89d336c0909d6fa", user: "__system", key: "02532bf5ebcec1bea79afb2c0e03c3db" }
m31201| Thu Jun 14 01:26:38 [FileAllocator] allocating new datafile /data/db/d2-1/test.ns, filling with zeroes...
m31202| Thu Jun 14 01:26:38 [rsSync] replSet initial sync pending
m31202| Thu Jun 14 01:26:38 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:38 [initandlisten] connection accepted from 10.255.119.66:34960 #23 (18 connections now open)
m31200| Thu Jun 14 01:26:38 [conn23] authenticate db: local { authenticate: 1, nonce: "49e9aee5d520c365", user: "__system", key: "61f9913501f3799e354558c1fdfe5d50" }
m31202| Thu Jun 14 01:26:38 [rsSync] build index local.me { _id: 1 }
m31202| Thu Jun 14 01:26:38 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Thu Jun 14 01:26:38 [rsSync] replSet initial sync drop all databases
m31202| Thu Jun 14 01:26:38 [rsSync] dropAllDatabasesExceptLocal 1
m31202| Thu Jun 14 01:26:38 [rsSync] replSet initial sync clone all databases
m31202| Thu Jun 14 01:26:38 [rsSync] replSet initial sync cloning db: test
m31200| Thu Jun 14 01:26:38 [initandlisten] connection accepted from 10.255.119.66:34961 #24 (19 connections now open)
m31200| Thu Jun 14 01:26:38 [conn24] authenticate db: local { authenticate: 1, nonce: "35ac4763503ffe", user: "__system", key: "db4efd9ee11c6bf4aa5f713088aa69e7" }
m31202| Thu Jun 14 01:26:38 [FileAllocator] allocating new datafile /data/db/d2-2/test.ns, filling with zeroes...
m31200| Thu Jun 14 01:26:38 [conn8] request split points lookup for chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:39 [conn8] request split points lookup for chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:39 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 28772.0 } -->> { : MaxKey }
m31200| Thu Jun 14 01:26:39 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 28772.0 }, max: { x: MaxKey }, from: "d2", splitKeys: [ { x: 40449.0 } ], shardId: "test.foo-x_28772.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31200| Thu Jun 14 01:26:39 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Thu Jun 14 01:26:39 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' acquired, ts : 4fd9760fa1dd57646fff0e71
m31200| Thu Jun 14 01:26:39 [conn8] splitChunk accepted at version 2|5||4fd975e444bfbb7b7d568221
m31200| Thu Jun 14 01:26:39 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:39-3", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:34918", time: new Date(1339651599275), what: "split", ns: "test.foo", details: { before: { min: { x: 28772.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 28772.0 }, max: { x: 40449.0 }, lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 40449.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31200| Thu Jun 14 01:26:39 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' unlocked.
m31000| Thu Jun 14 01:26:39 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 2|7||4fd975e444bfbb7b7d568221 based on: 2|5||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:39 [conn] autosplitted test.foo shard: ns:test.foo at: d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 lastmod: 2|5||000000000000000000000000 min: { x: 28772.0 } max: { x: MaxKey } on: { x: 40449.0 } (splitThreshold 943718) (migrate suggested)
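Each "ChunkManager: time to load chunks" line above is mongos re-reading the chunk layout from the config server after a split or migration. The same metadata can be read directly; a sketch of listing the current chunks for this collection:

    // connected to mongos; "config" is the cluster metadata database
    db.getSiblingDB("config").chunks
      .find({ ns: "test.foo" })
      .sort({ min: 1 })
      .forEach(function (c) {
          print(tojson(c.min) + " -->> " + tojson(c.max) + " on " + c.shard);
      });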
m31000| Thu Jun 14 01:26:39 [conn] moving chunk (auto): ns:test.foo at: d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 lastmod: 2|7||000000000000000000000000 min: { x: 40449.0 } max: { x: MaxKey } to: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31000| Thu Jun 14 01:26:39 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 lastmod: 2|7||000000000000000000000000 min: { x: 40449.0 } max: { x: MaxKey }) d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 -> d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31200| Thu Jun 14 01:26:39 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", to: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", fromShard: "d2", toShard: "d1", min: { x: 40449.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_40449.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31200| Thu Jun 14 01:26:39 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Thu Jun 14 01:26:39 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' acquired, ts : 4fd9760fa1dd57646fff0e72
m31200| Thu Jun 14 01:26:39 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:39-4", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:34918", time: new Date(1339651599280), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 40449.0 }, max: { x: MaxKey }, from: "d2", to: "d1" } }
m31200| Thu Jun 14 01:26:39 [conn8] moveChunk request accepted at version 2|7||4fd975e444bfbb7b7d568221
m31200| Thu Jun 14 01:26:39 [conn8] moveChunk number of documents: 1
m31201| Thu Jun 14 01:26:39 [FileAllocator] done allocating datafile /data/db/d2-1/test.ns, size: 16MB, took 0.82 secs
m31202| Thu Jun 14 01:26:39 [FileAllocator] done allocating datafile /data/db/d2-2/test.ns, size: 16MB, took 0.812 secs
m31202| Thu Jun 14 01:26:39 [FileAllocator] allocating new datafile /data/db/d2-2/test.0, filling with zeroes...
m31201| Thu Jun 14 01:26:39 [FileAllocator] allocating new datafile /data/db/d2-1/test.0, filling with zeroes...
m31200| Thu Jun 14 01:26:40 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", min: { x: 40449.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "catchup", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31201| Thu Jun 14 01:26:40 [FileAllocator] done allocating datafile /data/db/d2-1/test.0, size: 16MB, took 0.793 secs
m31200| Thu Jun 14 01:26:40 [conn22] end connection 10.255.119.66:34959 (18 connections now open)
m31200| Thu Jun 14 01:26:40 [initandlisten] connection accepted from 10.255.119.66:34962 #25 (19 connections now open)
m31200| Thu Jun 14 01:26:40 [conn25] authenticate db: local { authenticate: 1, nonce: "fc341fd18a7a19d9", user: "__system", key: "ef865a4f7c17a129215dd242d4fcd60c" }
m31200| Thu Jun 14 01:26:40 [conn25] end connection 10.255.119.66:34962 (18 connections now open)
m31201| Thu Jun 14 01:26:40 [rsSync] build index test.foo { _id: 1 }
m31201| Thu Jun 14 01:26:40 [rsSync] fastBuildIndex dupsToDrop:0
m31201| Thu Jun 14 01:26:40 [rsSync] build index done. scanned 34600 total records. 0.127 secs
m31201| Thu Jun 14 01:26:40 [rsSync] replSet initial sync cloning db: admin
m31201| Thu Jun 14 01:26:40 [rsSync] replSet initial sync data copy, starting syncup
m31202| Thu Jun 14 01:26:40 [FileAllocator] done allocating datafile /data/db/d2-2/test.0, size: 16MB, took 0.851 secs
m31202| Thu Jun 14 01:26:40 [rsSync] build index test.foo { _id: 1 }
m31201| Thu Jun 14 01:26:40 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:40 [initandlisten] connection accepted from 10.255.119.66:34963 #26 (19 connections now open)
m31202| Thu Jun 14 01:26:40 [rsSync] fastBuildIndex dupsToDrop:0
m31202| Thu Jun 14 01:26:40 [rsSync] build index done. scanned 34600 total records. 0.131 secs
m31202| Thu Jun 14 01:26:40 [rsSync] replSet initial sync cloning db: admin
m31202| Thu Jun 14 01:26:40 [rsSync] replSet initial sync data copy, starting syncup
m31200| Thu Jun 14 01:26:40 [conn26] authenticate db: local { authenticate: 1, nonce: "7ed72035c8c3ec04", user: "__system", key: "32ed2eb25b7cae1a424cf9929633bc62" }
m31200| Thu Jun 14 01:26:40 [conn24] end connection 10.255.119.66:34961 (18 connections now open)
m31200| Thu Jun 14 01:26:40 [initandlisten] connection accepted from 10.255.119.66:34964 #27 (19 connections now open)
m31200| Thu Jun 14 01:26:40 [conn27] authenticate db: local { authenticate: 1, nonce: "ebab9c57f9c479e2", user: "__system", key: "05e2328eac8f30db1546de4e90689a71" }
m31200| Thu Jun 14 01:26:40 [conn27] end connection 10.255.119.66:34964 (18 connections now open)
m31200| Thu Jun 14 01:26:40 [initandlisten] connection accepted from 10.255.119.66:34965 #28 (19 connections now open)
m31200| Thu Jun 14 01:26:40 [conn28] authenticate db: local { authenticate: 1, nonce: "7f6c7e9183d92ab8", user: "__system", key: "19a78fcee5d8ac45cc67b6c8010b25cc" }
m31000| Thu Jun 14 01:26:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' acquired, ts : 4fd9761044bfbb7b7d568226
m31000| Thu Jun 14 01:26:40 [Balancer] ---- ShardInfoMap
m31000| Thu Jun 14 01:26:40 [Balancer] d1 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m31000| Thu Jun 14 01:26:40 [Balancer] d2 maxSize: 0 currSize: 144 draining: 0 hasOpsQueued: 0
m31000| Thu Jun 14 01:26:40 [Balancer] ---- ShardToChunksMap
m31000| Thu Jun 14 01:26:40 [Balancer] d1
m31000| Thu Jun 14 01:26:40 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Thu Jun 14 01:26:40 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m31000| Thu Jun 14 01:26:40 [Balancer] d2
m31000| Thu Jun 14 01:26:40 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 17642.0 }, shard: "d2" }
m31000| Thu Jun 14 01:26:40 [Balancer] { _id: "test.foo-x_17642.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 17642.0 }, max: { x: 28772.0 }, shard: "d2" }
m31000| Thu Jun 14 01:26:40 [Balancer] { _id: "test.foo-x_28772.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 28772.0 }, max: { x: 40449.0 }, shard: "d2" }
m31000| Thu Jun 14 01:26:40 [Balancer] { _id: "test.foo-x_40449.0", lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 40449.0 }, max: { x: MaxKey }, shard: "d2" }
m31000| Thu Jun 14 01:26:40 [Balancer] ----
m31000| Thu Jun 14 01:26:40 [Balancer] chose [d2] to [d1] { _id: "test.foo-x_5850.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 17642.0 }, shard: "d2" }
m31000| Thu Jun 14 01:26:40 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 lastmod: 2|2||000000000000000000000000 min: { x: 5850.0 } max: { x: 17642.0 }) d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202 -> d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31000| Thu Jun 14 01:26:40 [Balancer] moveChunk result: { errmsg: "migration already in progress", ok: 0.0 }
m31000| Thu Jun 14 01:26:40 [Balancer] balancer move failed: { errmsg: "migration already in progress", ok: 0.0 } from: d2 to: d1 chunk: Assertion: 13655:BSONElement: bad type 111
m31000| 0x84f514a 0x8126495 0x83f3537 0x811e4ce 0x8121cf1 0x8488fac 0x82c589c 0x8128991 0x82c32b3 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0xa64542 0x1e5b6e
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo11BSONElement4sizeEi+0x20e) [0x811e4ce]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo7BSONObj8toStringERNS_17StringBuilderImplINS_16TrivialAllocatorEEEbbi+0xf1) [0x8121cf1]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo14BalancerPolicy9ChunkInfo8toStringEv+0x7c) [0x8488fac]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo14LazyStringImplINS_14BalancerPolicy9ChunkInfoEE3valEv+0x2c) [0x82c589c]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo9LogstreamlsERKNS_10LazyStringE+0x31) [0x8128991]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x853) [0x82c32b3]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c) [0x82c4b6c]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m31000| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m31000| /lib/i686/nosegneg/libpthread.so.0 [0xa64542]
m31000| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x1e5b6e]
m31000| Thu Jun 14 01:26:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:31000:1339651510:1804289383' unlocked.
m31000| Thu Jun 14 01:26:40 [Balancer] scoped connection to domU-12-31-39-01-70-B4:29000 not being returned to the pool
m31000| Thu Jun 14 01:26:40 [Balancer] caught exception while doing balance: BSONElement: bad type 111
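Per the stack trace, the assertion fires while the Balancer stringifies the chunk it chose for the log message (BalancerPolicy::ChunkInfo::toString -> BSONObj::toString), after the move itself was already rejected with "migration already in progress"; the balancer then abandons this round. Its state and history live in the config database and can be inspected from mongos; a sketch:

    // connected to mongos
    db.getSiblingDB("config").settings.find({ _id: "balancer" });   // stopped flag / balancing window, if set
    db.getSiblingDB("config").locks.find({ _id: "balancer" });      // current holder of the balancer lock
    db.getSiblingDB("config").changelog.find({ what: /^moveChunk/ }).sort({ time: -1 }).limit(5);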
m31200| Thu Jun 14 01:26:40 [conn28] received moveChunk request: { moveChunk: "test.foo", from: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", to: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", fromShard: "d2", toShard: "d1", min: { x: 5850.0 }, max: { x: 17642.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_5850.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31200| Thu Jun 14 01:26:40 [conn28] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:40-5", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:34965", time: new Date(1339651600796), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: 17642.0 }, step1 of 6: 0, note: "aborted" } }
m29000| Thu Jun 14 01:26:40 [conn10] end connection 10.255.119.66:36525 (14 connections now open)
m31201| Thu Jun 14 01:26:40 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:40 [initandlisten] connection accepted from 10.255.119.66:34966 #29 (20 connections now open)
m31200| Thu Jun 14 01:26:40 [conn29] authenticate db: local { authenticate: 1, nonce: "91dc8a686668658f", user: "__system", key: "44c1c66b96af7f30b4e37b5154344b91" }
m31201| Thu Jun 14 01:26:40 [rsSync] replSet initial sync building indexes
m31201| Thu Jun 14 01:26:40 [rsSync] replSet initial sync cloning indexes for : test
m31201| Thu Jun 14 01:26:40 [rsSync] build index test.foo { x: 1.0 }
m31200| Thu Jun 14 01:26:40 [initandlisten] connection accepted from 10.255.119.66:34967 #30 (21 connections now open)
m31200| Thu Jun 14 01:26:40 [conn30] authenticate db: local { authenticate: 1, nonce: "74ab039774d84fbb", user: "__system", key: "7fcbd812fbf91b49a0d225e92fb3f797" }
m31201| Thu Jun 14 01:26:40 [FileAllocator] allocating new datafile /data/db/d2-1/test.1, filling with zeroes...
m31100| Thu Jun 14 01:26:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 40449.0 } -> { x: MaxKey }
m31200| Thu Jun 14 01:26:41 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", min: { x: 40449.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "catchup", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31202| Thu Jun 14 01:26:41 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:41 [initandlisten] connection accepted from 10.255.119.66:34968 #31 (22 connections now open)
m31200| Thu Jun 14 01:26:41 [conn31] authenticate db: local { authenticate: 1, nonce: "d3874259264751c4", user: "__system", key: "05d2581073b5191c5044fe0d95856402" }
m31201| Thu Jun 14 01:26:41 [FileAllocator] done allocating datafile /data/db/d2-1/test.1, size: 32MB, took 0.733 secs
m31201| Thu Jun 14 01:26:41 [rsSync] build index done. scanned 34600 total records. 0.858 secs
m31201| Thu Jun 14 01:26:41 [rsSync] replSet initial sync cloning indexes for : admin
m31200| Thu Jun 14 01:26:41 [conn30] end connection 10.255.119.66:34967 (21 connections now open)
m31201| Thu Jun 14 01:26:41 [rsSync] replSet initial sync query minValid
m31201| Thu Jun 14 01:26:41 [rsSync] replSet initial sync finishing up
m31201| Thu Jun 14 01:26:41 [rsSync] replSet set minValid=4fd9760f:62c
m31201| Thu Jun 14 01:26:41 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Thu Jun 14 01:26:41 [rsSync] build index done. scanned 0 total records. 0 secs
m31200| Thu Jun 14 01:26:41 [initandlisten] connection accepted from 10.255.119.66:34969 #32 (22 connections now open)
m31200| Thu Jun 14 01:26:41 [conn32] authenticate db: local { authenticate: 1, nonce: "99b118ce1e5afaf2", user: "__system", key: "1727b663067471f1926475e8bf8019ed" }
m31200| Thu Jun 14 01:26:41 [conn32] end connection 10.255.119.66:34969 (21 connections now open)
m31201| Thu Jun 14 01:26:41 [rsSync] replSet initial sync done
m31200| Thu Jun 14 01:26:41 [conn21] end connection 10.255.119.66:34958 (20 connections now open)
m31202| Thu Jun 14 01:26:41 [rsSync] replSet initial sync building indexes
m31202| Thu Jun 14 01:26:41 [rsSync] replSet initial sync cloning indexes for : test
m31202| Thu Jun 14 01:26:41 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:26:41 [initandlisten] connection accepted from 10.255.119.66:34970 #33 (21 connections now open)
m31200| Thu Jun 14 01:26:41 [conn33] authenticate db: local { authenticate: 1, nonce: "dcea30b10e8c80db", user: "__system", key: "b06c66b150185ee04c7d4507dab3cc5c" }
m31200| Thu Jun 14 01:26:41 [initandlisten] connection accepted from 10.255.119.66:34971 #34 (22 connections now open)
m31200| Thu Jun 14 01:26:41 [conn34] authenticate db: local { authenticate: 1, nonce: "99841656b410f266", user: "__system", key: "e6152224137ed4669fd41d00db8443cc" }
m31202| Thu Jun 14 01:26:41 [rsSync] build index test.foo { x: 1.0 }
m31202| Thu Jun 14 01:26:41 [FileAllocator] allocating new datafile /data/db/d2-2/test.1, filling with zeroes...
m31200| Thu Jun 14 01:26:42 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", min: { x: 40449.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Thu Jun 14 01:26:42 [conn8] moveChunk setting version to: 3|0||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:42 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 40449.0 } -> { x: MaxKey }
m31100| Thu Jun 14 01:26:42 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:42-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651602301), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 40449.0 }, max: { x: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 3019 } }
m31200| Thu Jun 14 01:26:42 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", min: { x: 40449.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 }
m31200| Thu Jun 14 01:26:42 [conn8] moveChunk updating self version to: 3|1||4fd975e444bfbb7b7d568221 through { x: 5850.0 } -> { x: 17642.0 } for collection 'test.foo'
m31200| Thu Jun 14 01:26:42 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:42-6", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:34918", time: new Date(1339651602305), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 40449.0 }, max: { x: MaxKey }, from: "d2", to: "d1" } }
m31200| Thu Jun 14 01:26:42 [conn8] doing delete inline
m31200| Thu Jun 14 01:26:42 [conn8] moveChunk deleted: 1
m31200| Thu Jun 14 01:26:42 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31200:1339651594:292043064' unlocked.
m31200| Thu Jun 14 01:26:42 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:42-7", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:34918", time: new Date(1339651602306), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 40449.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 3008, step5 of 6: 16, step6 of 6: 0 } }
m31200| Thu Jun 14 01:26:42 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", to: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", fromShard: "d2", toShard: "d1", min: { x: 40449.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_40449.0", configdb: "domU-12-31-39-01-70-B4:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:132354 w:437 reslen:37 3027ms
m31000| Thu Jun 14 01:26:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 3|1||4fd975e444bfbb7b7d568221 based on: 2|7||4fd975e444bfbb7b7d568221
m31201| Thu Jun 14 01:26:42 [rsSync] replSet SECONDARY
m31202| Thu Jun 14 01:26:42 [FileAllocator] done allocating datafile /data/db/d2-2/test.1, size: 32MB, took 0.585 secs
m31202| Thu Jun 14 01:26:42 [rsSync] build index done. scanned 34600 total records. 0.709 secs
m31202| Thu Jun 14 01:26:42 [rsSync] replSet initial sync cloning indexes for : admin
m31202| Thu Jun 14 01:26:42 [rsSync] replSet initial sync query minValid
m31202| Thu Jun 14 01:26:42 [rsSync] replSet initial sync finishing up
m31202| Thu Jun 14 01:26:42 [rsSync] replSet set minValid=4fd97612:1
m31202| Thu Jun 14 01:26:42 [rsSync] build index local.replset.minvalid { _id: 1 }
m31202| Thu Jun 14 01:26:42 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Thu Jun 14 01:26:42 [rsSync] replSet initial sync done
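Both d2 secondaries have now finished initial sync (31201 at 01:26:41, 31202 here) and report SECONDARY shortly after. From a shell connected to any member, the equivalent check is just rs.status(); a sketch:

    // connected to any member of the d2 replica set
    rs.status().members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr);   // expect one PRIMARY and two SECONDARYs
    });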
m31200| Thu Jun 14 01:26:42 [conn33] end connection 10.255.119.66:34970 (21 connections now open)
m31200| Thu Jun 14 01:26:42 [initandlisten] connection accepted from 10.255.119.66:34972 #35 (22 connections now open)
m31200| Thu Jun 14 01:26:42 [conn35] authenticate db: local { authenticate: 1, nonce: "dbc4a74fb525c8fb", user: "__system", key: "ca7ae8c61d08ba46f06d5a86708b0d01" }
m31200| Thu Jun 14 01:26:42 [conn35] end connection 10.255.119.66:34972 (21 connections now open)
m31200| Thu Jun 14 01:26:42 [conn23] end connection 10.255.119.66:34960 (20 connections now open)
m31200| Thu Jun 14 01:26:42 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state SECONDARY
m31100| Thu Jun 14 01:26:43 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 22 locks(micros) r:40768 nreturned:3062 reslen:52074 211ms
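The getmore local.oplog.rs entries are the d1 secondaries tailing the primary's oplog over the connections opened earlier. The same collection can be inspected directly on the primary; a sketch of reading the newest entry:

    // on the d1 primary (port 31100)
    db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1).forEach(printjson);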
m31202| Thu Jun 14 01:26:43 [rsSync] replSet SECONDARY
m31100| Thu Jun 14 01:26:43 [conn8] request split points lookup for chunk test.foo { : 40449.0 } -->> { : MaxKey }
m31202| Thu Jun 14 01:26:43 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state SECONDARY
m31100| Thu Jun 14 01:26:43 [conn8] request split points lookup for chunk test.foo { : 40449.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:44 [conn8] request split points lookup for chunk test.foo { : 40449.0 } -->> { : MaxKey }
m31102| Thu Jun 14 01:26:44 [conn10] end connection 10.255.119.66:54733 (10 connections now open)
m31102| Thu Jun 14 01:26:44 [initandlisten] connection accepted from 10.255.119.66:54802 #14 (11 connections now open)
m31102| Thu Jun 14 01:26:44 [conn14] authenticate db: local { authenticate: 1, nonce: "fa48a38ad363ad72", user: "__system", key: "0c3597af8bbaaa273532a249dbca2b44" }
m31100| Thu Jun 14 01:26:44 [conn27] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:2976538272780824479 ntoreturn:0 keyUpdates:0 numYields: 9 locks(micros) r:188582 nreturned:981 reslen:152075 192ms
m31200| Thu Jun 14 01:26:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state SECONDARY
m31100| Thu Jun 14 01:26:44 [conn8] request split points lookup for chunk test.foo { : 40449.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:44 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 40449.0 } -->> { : MaxKey }
m31201| Thu Jun 14 01:26:44 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31202 is now in state SECONDARY
m31000| Thu Jun 14 01:26:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 3|3||4fd975e444bfbb7b7d568221 based on: 3|1||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:44 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 3|0||000000000000000000000000 min: { x: 40449.0 } max: { x: MaxKey } on: { x: 51379.0 } (splitThreshold 943718) (migrate suggested)
m31100| Thu Jun 14 01:26:44 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 40449.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 51379.0 } ], shardId: "test.foo-x_40449.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:44 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:44 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd976140bf7ca455ce4e47c
m31100| Thu Jun 14 01:26:44 [conn8] splitChunk accepted at version 3|0||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:44 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:44-7", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651604930), what: "split", ns: "test.foo", details: { before: { min: { x: 40449.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 40449.0 }, max: { x: 51379.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 51379.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31100| Thu Jun 14 01:26:44 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31100| Thu Jun 14 01:26:45 [conn8] request split points lookup for chunk test.foo { : 51379.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:45 [conn8] request split points lookup for chunk test.foo { : 51379.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:45 [conn8] request split points lookup for chunk test.foo { : 51379.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:46 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 12 locks(micros) r:83998 nreturned:2232 reslen:37964 162ms
m31100| Thu Jun 14 01:26:46 [conn8] request split points lookup for chunk test.foo { : 51379.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:46 [conn8] request split points lookup for chunk test.foo { : 51379.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:46 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 12 locks(micros) r:91132 nreturned:2157 reslen:36689 123ms
m31102| Thu Jun 14 01:26:46 [conn11] end connection 10.255.119.66:54734 (10 connections now open)
m31102| Thu Jun 14 01:26:46 [initandlisten] connection accepted from 10.255.119.66:54803 #15 (11 connections now open)
m31102| Thu Jun 14 01:26:46 [conn15] authenticate db: local { authenticate: 1, nonce: "c1da18344eb47c67", user: "__system", key: "ef52a9e1ea7c35e0516d85111584d608" }
m31100| Thu Jun 14 01:26:46 [FileAllocator] allocating new datafile /data/db/d1-0/test.1, filling with zeroes...
m31101| Thu Jun 14 01:26:46 [conn11] end connection 10.255.119.66:46113 (10 connections now open)
m31101| Thu Jun 14 01:26:46 [initandlisten] connection accepted from 10.255.119.66:46182 #15 (11 connections now open)
m31101| Thu Jun 14 01:26:46 [conn15] authenticate db: local { authenticate: 1, nonce: "1a8e787880757aef", user: "__system", key: "11c566a3f6562002d2b0f466995d19fe" }
m31202| Thu Jun 14 01:26:46 [conn3] end connection 10.255.119.66:58036 (10 connections now open)
m31202| Thu Jun 14 01:26:46 [initandlisten] connection accepted from 10.255.119.66:58109 #12 (11 connections now open)
m31202| Thu Jun 14 01:26:46 [conn12] authenticate db: local { authenticate: 1, nonce: "541dd22fdf3f891b", user: "__system", key: "103524cf90aa66188edca88d90967c38" }
m31101| Thu Jun 14 01:26:47 [FileAllocator] allocating new datafile /data/db/d1-1/test.1, filling with zeroes...
m31102| Thu Jun 14 01:26:47 [FileAllocator] allocating new datafile /data/db/d1-2/test.1, filling with zeroes...
m31100| Thu Jun 14 01:26:47 [FileAllocator] done allocating datafile /data/db/d1-0/test.1, size: 32MB, took 0.93 secs
m31100| Thu Jun 14 01:26:47 [conn9] insert test.foo keyUpdates:0 locks(micros) W:510 r:816 w:3349279 931ms
m31100| Thu Jun 14 01:26:47 [conn8] request split points lookup for chunk test.foo { : 51379.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:47 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 51379.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:47 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 51379.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 62468.0 } ], shardId: "test.foo-x_51379.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:47 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:47 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd976170bf7ca455ce4e47d
m31100| Thu Jun 14 01:26:47 [conn8] splitChunk accepted at version 3|3||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:47 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:47-8", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651607952), what: "split", ns: "test.foo", details: { before: { min: { x: 51379.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 51379.0 }, max: { x: 62468.0 }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 62468.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31100| Thu Jun 14 01:26:47 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31000| Thu Jun 14 01:26:47 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 3|5||4fd975e444bfbb7b7d568221 based on: 3|3||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:47 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 3|3||000000000000000000000000 min: { x: 51379.0 } max: { x: MaxKey } on: { x: 62468.0 } (splitThreshold 943718) (migrate suggested)
m31100| Thu Jun 14 01:26:48 [conn8] request split points lookup for chunk test.foo { : 62468.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:48 [conn8] request split points lookup for chunk test.foo { : 62468.0 } -->> { : MaxKey }
m31202| Thu Jun 14 01:26:48 [conn4] end connection 10.255.119.66:58042 (10 connections now open)
m31202| Thu Jun 14 01:26:49 [initandlisten] connection accepted from 10.255.119.66:58110 #13 (11 connections now open)
m31202| Thu Jun 14 01:26:49 [conn13] authenticate db: local { authenticate: 1, nonce: "804e2d4f38bdf8c5", user: "__system", key: "3501f426ef07c9e20aee0de4289ad969" }
m31100| Thu Jun 14 01:26:48 [conn8] request split points lookup for chunk test.foo { : 62468.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:48 [conn8] request split points lookup for chunk test.foo { : 62468.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:49 [conn8] request split points lookup for chunk test.foo { : 62468.0 } -->> { : MaxKey }
m31102| Thu Jun 14 01:26:49 [FileAllocator] done allocating datafile /data/db/d1-2/test.1, size: 32MB, took 1.496 secs
m31100| Thu Jun 14 01:26:49 [conn8] request split points lookup for chunk test.foo { : 62468.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:49 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 62468.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:49 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 62468.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 73489.0 } ], shardId: "test.foo-x_62468.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:49 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:49 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd976190bf7ca455ce4e47e
m31100| Thu Jun 14 01:26:49 [conn8] splitChunk accepted at version 3|5||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:49 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:49-9", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651609860), what: "split", ns: "test.foo", details: { before: { min: { x: 62468.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 62468.0 }, max: { x: 73489.0 }, lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 73489.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31100| Thu Jun 14 01:26:49 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31201| Thu Jun 14 01:26:49 [conn4] end connection 10.255.119.66:47296 (10 connections now open)
m31201| Thu Jun 14 01:26:49 [initandlisten] connection accepted from 10.255.119.66:47364 #13 (11 connections now open)
m31201| Thu Jun 14 01:26:49 [conn13] authenticate db: local { authenticate: 1, nonce: "88d12b9a9d251b05", user: "__system", key: "0ead9226bc21eca3f797918181ae9709" }
m31100| Thu Jun 14 01:26:50 [conn9] insert test.foo keyUpdates:0 locks(micros) W:510 r:1148 w:4336005 132ms
m31100| Thu Jun 14 01:26:50 [conn8] request split points lookup for chunk test.foo { : 73489.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:50 [conn27] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:2976538272780824479 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:448817 nreturned:1313 reslen:203535 131ms
m31000| Thu Jun 14 01:26:49 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 3|7||4fd975e444bfbb7b7d568221 based on: 3|5||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:49 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 3|5||000000000000000000000000 min: { x: 62468.0 } max: { x: MaxKey } on: { x: 73489.0 } (splitThreshold 943718) (migrate suggested)
m31101| Thu Jun 14 01:26:49 [FileAllocator] done allocating datafile /data/db/d1-1/test.1, size: 32MB, took 1.647 secs
m31100| Thu Jun 14 01:26:50 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 10 locks(micros) r:134655 nreturned:3354 reslen:57038 107ms
m31100| Thu Jun 14 01:26:50 [conn8] request split points lookup for chunk test.foo { : 73489.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:51 [conn8] request split points lookup for chunk test.foo { : 73489.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:51 [conn9] insert test.foo keyUpdates:0 locks(micros) W:510 r:1148 w:4802239 134ms
m31100| Thu Jun 14 01:26:51 [conn8] request split points lookup for chunk test.foo { : 73489.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:51 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 13 locks(micros) r:209771 nreturned:3107 reslen:52839 192ms
m31100| Thu Jun 14 01:26:51 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 8 locks(micros) r:263678 nreturned:915 reslen:15575 104ms
m31100| Thu Jun 14 01:26:51 [conn8] request split points lookup for chunk test.foo { : 73489.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:52 [conn8] request split points lookup for chunk test.foo { : 73489.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:52 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 73489.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:52 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 73489.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 84666.0 } ], shardId: "test.foo-x_73489.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:52 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:52 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd9761c0bf7ca455ce4e47f
m31100| Thu Jun 14 01:26:52 [conn8] splitChunk accepted at version 3|7||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:52 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:52-10", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651612231), what: "split", ns: "test.foo", details: { before: { min: { x: 73489.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 73489.0 }, max: { x: 84666.0 }, lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 84666.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31100| Thu Jun 14 01:26:52 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31000| Thu Jun 14 01:26:52 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 3|9||4fd975e444bfbb7b7d568221 based on: 3|7||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:52 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 3|7||000000000000000000000000 min: { x: 73489.0 } max: { x: MaxKey } on: { x: 84666.0 } (splitThreshold 943718) (migrate suggested)
m31100| Thu Jun 14 01:26:52 [conn8] request split points lookup for chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:52 [conn8] request split points lookup for chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:53 [conn8] request split points lookup for chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:53 [conn8] request split points lookup for chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:53 [conn27] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:2976538272780824479 ntoreturn:0 keyUpdates:0 numYields: 11 locks(micros) r:625634 nreturned:1663 reslen:257785 143ms
m31100| Thu Jun 14 01:26:53 [conn8] request split points lookup for chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:53 [conn8] request split points lookup for chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31000| Thu Jun 14 01:26:54 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 3|11||4fd975e444bfbb7b7d568221 based on: 3|9||4fd975e444bfbb7b7d568221
m31000| Thu Jun 14 01:26:54 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 3|9||000000000000000000000000 min: { x: 84666.0 } max: { x: MaxKey } on: { x: 96830.0 } (splitThreshold 943718) (migrate suggested)
m31100| Thu Jun 14 01:26:54 [conn8] request split points lookup for chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:54 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 84666.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:54 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 84666.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 96830.0 } ], shardId: "test.foo-x_84666.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:26:54 [conn8] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:26:54 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd9761e0bf7ca455ce4e480
m31100| Thu Jun 14 01:26:54 [conn8] splitChunk accepted at version 3|9||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:26:54 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:26:54-11", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54748", time: new Date(1339651614225), what: "split", ns: "test.foo", details: { before: { min: { x: 84666.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 84666.0 }, max: { x: 96830.0 }, lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') }, right: { min: { x: 96830.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221') } } }
m31100| Thu Jun 14 01:26:54 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
null
chunks: 8 3 11
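The bare null is the return value of a shell statement, and "chunks: 8 3 11" is the test printing per-shard chunk counts, which is consistent with the splits and migrations above: 8 chunks on d1, 3 on d2, 11 in total. A sketch of computing the same numbers from the config metadata:

    var chunks = db.getSiblingDB("config").chunks;
    var onD1 = chunks.count({ ns: "test.foo", shard: "d1" });
    var onD2 = chunks.count({ ns: "test.foo", shard: "d2" });
    print("chunks: " + onD1 + " " + onD2 + " " + (onD1 + onD2));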
m31200| Thu Jun 14 01:26:54 [conn20] getmore test.foo cursorid:4554715836668274614 ntoreturn:0 keyUpdates:0 locks(micros) r:118240 nreturned:34498 reslen:3346326 118ms
m31100| Thu Jun 14 01:26:54 [conn8] request split points lookup for chunk test.foo { : 96830.0 } -->> { : MaxKey }
m31100| Thu Jun 14 01:26:54 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 6 locks(micros) r:338208 nreturned:4556 reslen:77472 188ms
ReplSetTest waitForIndicator state on connection to domU-12-31-39-01-70-B4:31201
[ 2 ]
ReplSetTest waitForIndicator from node connection to domU-12-31-39-01-70-B4:31201
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T05:26:56Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "domU-12-31-39-01-70-B4:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "domU-12-31-39-01-70-B4:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 31
        },
        {
            "_id" : 2,
            "name" : "domU-12-31-39-01-70-B4:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 9
        }
    ],
    "ok" : 1
}
Status for : domU-12-31-39-01-70-B4:31200, checking domU-12-31-39-01-70-B4:31201/domU-12-31-39-01-70-B4:31201
Status for : domU-12-31-39-01-70-B4:31201, checking domU-12-31-39-01-70-B4:31201/domU-12-31-39-01-70-B4:31201
Status : 2 target state : 2
ReplSetTest waitForIndicator final status:
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T05:26:56Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "domU-12-31-39-01-70-B4:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "domU-12-31-39-01-70-B4:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 31
        },
        {
            "_id" : 2,
            "name" : "domU-12-31-39-01-70-B4:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 9
        }
    ],
    "ok" : 1
}
ReplSetTest waitForIndicator state on connection to domU-12-31-39-01-70-B4:31202
[ 2 ]
ReplSetTest waitForIndicator from node connection to domU-12-31-39-01-70-B4:31202
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T05:26:56Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "domU-12-31-39-01-70-B4:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "domU-12-31-39-01-70-B4:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 31
        },
        {
            "_id" : 2,
            "name" : "domU-12-31-39-01-70-B4:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 9
        }
    ],
    "ok" : 1
}
Status for : domU-12-31-39-01-70-B4:31200, checking domU-12-31-39-01-70-B4:31202/domU-12-31-39-01-70-B4:31202
Status for : domU-12-31-39-01-70-B4:31201, checking domU-12-31-39-01-70-B4:31202/domU-12-31-39-01-70-B4:31202
Status for : domU-12-31-39-01-70-B4:31202, checking domU-12-31-39-01-70-B4:31202/domU-12-31-39-01-70-B4:31202
Status : 2 target state : 2
ReplSetTest waitForIndicator final status:
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T05:26:56Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "domU-12-31-39-01-70-B4:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "domU-12-31-39-01-70-B4:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 31
        },
        {
            "_id" : 2,
            "name" : "domU-12-31-39-01-70-B4:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339651602000, 1),
            "optimeDate" : ISODate("2012-06-14T05:26:42Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:26:54Z"),
            "pingMs" : 9
        }
    ],
    "ok" : 1
}
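The repeated documents above are replSetGetStatus output for replica set d2: ReplSetTest.waitForIndicator prints the status when it starts waiting and again once the watched node (31201, then 31202) reports the target state 2 (SECONDARY); both nodes are already secondaries here, so the initial and final dumps are identical. A minimal sketch of the same wait done by hand, assuming a direct, already-authenticated connection to the d2 primary:

    // Hedged sketch: poll replSetGetStatus until 31201 reports state 2 (SECONDARY),
    // which is what the waitForIndicator calls above are doing under the hood.
    var primary = new Mongo("domU-12-31-39-01-70-B4:31200");
    assert.soon(function() {
        var status = primary.getDB("admin").runCommand({ replSetGetStatus: 1 });
        var node = status.members.filter(function(m) {
            return m.name == "domU-12-31-39-01-70-B4:31201";
        })[0];
        return node && node.state == 2;
    }, "31201 never reached SECONDARY", 300000);   // same 300000 ms timeout as above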
{
    "user" : "foo",
    "readOnly" : false,
    "pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
    "_id" : ObjectId("4fd976206f2560a99818e217")
}
m31100| Thu Jun 14 01:26:56 [FileAllocator] allocating new datafile /data/db/d1-0/admin.ns, filling with zeroes...
m31100| Thu Jun 14 01:26:56 [FileAllocator] done allocating datafile /data/db/d1-0/admin.ns, size: 16MB, took 0.283 secs
m31100| Thu Jun 14 01:26:56 [FileAllocator] allocating new datafile /data/db/d1-0/admin.0, filling with zeroes...
m31100| Thu Jun 14 01:26:56 [FileAllocator] done allocating datafile /data/db/d1-0/admin.0, size: 16MB, took 0.446 secs
m31100| Thu Jun 14 01:26:56 [conn2] build index admin.system.users { _id: 1 }
m31100| Thu Jun 14 01:26:56 [conn2] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:26:56 [conn2] insert admin.system.users keyUpdates:0 locks(micros) W:1483276 r:841 w:739695 739ms
could not find getLastError object : "getlasterror failed: { \"errmsg\" : \"need to login\", \"ok\" : 0 }"
{
    "user" : "foo",
    "readOnly" : false,
    "pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
    "_id" : ObjectId("4fd976206f2560a99818e218")
}
m31200| Thu Jun 14 01:26:56 [FileAllocator] allocating new datafile /data/db/d2-0/admin.ns, filling with zeroes...
m31101| Thu Jun 14 01:26:56 [FileAllocator] allocating new datafile /data/db/d1-1/admin.ns, filling with zeroes...
m31102| Thu Jun 14 01:26:56 [FileAllocator] allocating new datafile /data/db/d1-2/admin.ns, filling with zeroes...
m31200| Thu Jun 14 01:26:57 [FileAllocator] done allocating datafile /data/db/d2-0/admin.ns, size: 16MB, took 0.6 secs
m31200| Thu Jun 14 01:26:57 [FileAllocator] allocating new datafile /data/db/d2-0/admin.0, filling with zeroes...
m31102| Thu Jun 14 01:26:57 [FileAllocator] done allocating datafile /data/db/d1-2/admin.ns, size: 16MB, took 1.046 secs
m31101| Thu Jun 14 01:26:57 [FileAllocator] done allocating datafile /data/db/d1-1/admin.ns, size: 16MB, took 1.066 secs
m31101| Thu Jun 14 01:26:57 [FileAllocator] allocating new datafile /data/db/d1-1/admin.0, filling with zeroes...
m31102| Thu Jun 14 01:26:57 [FileAllocator] allocating new datafile /data/db/d1-2/admin.0, filling with zeroes...
m31200| Thu Jun 14 01:26:58 [FileAllocator] done allocating datafile /data/db/d2-0/admin.0, size: 16MB, took 0.89 secs
m31200| Thu Jun 14 01:26:58 [conn2] build index admin.system.users { _id: 1 }
m31200| Thu Jun 14 01:26:58 [conn2] build index done. scanned 0 total records. 0.16 secs
m31200| Thu Jun 14 01:26:58 [conn26] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759801444143335) } } cursorid:242094923706609940 ntoreturn:0 keyUpdates:0 locks(micros) r:6909 nreturned:1 reslen:177 5821ms
m31200| Thu Jun 14 01:26:58 [conn2] insert admin.system.users keyUpdates:0 locks(micros) W:2115427 r:596 w:1662693 1662ms
could not find getLastError object : "getlasterror failed: { \"errmsg\" : \"need to login\", \"ok\" : 0 }"
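The two { "user" : "foo", ... } documents above are the admin users the test writes into admin.system.users on each shard primary (m31100, then m31200); the pwd field is MongoDB's stored credential hash, md5("<user>:mongo:<password>"). The "could not find getLastError object ... need to login" messages are the shell reporting that its follow-up getlasterror call on those connections was rejected; the test carries on and authenticates afterwards. A hedged sketch of the 2.x-era shell calls this corresponds to (the actual password is not shown in the log, so "password" below is a placeholder):

    // Hedged sketch: creating and authenticating the per-shard admin user.
    // db.addUser() is the 2.x API; "password" is a placeholder, not the test's value.
    var shardAdmin = connect("domU-12-31-39-01-70-B4:31100/admin");
    shardAdmin.addUser("foo", "password");       // inserts into admin.system.users
    assert(shardAdmin.auth("foo", "password"));  // subsequent commands now authorized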
m31000| Thu Jun 14 01:26:58 [conn] authenticate db: test { authenticate: 1.0, user: "bar", nonce: "885f276798ae246e", key: "78fa535e078e60c642fb4c2e8856c703" }
m31100| Thu Jun 14 01:26:58 [conn32] end connection 10.255.119.66:54780 (21 connections now open)
m31101| Thu Jun 14 01:26:58 [initandlisten] connection accepted from 10.255.119.66:46186 #16 (12 connections now open)
m31101| Thu Jun 14 01:26:58 [conn16] authenticate db: local { authenticate: 1, nonce: "9aa064a4df9cf99c", user: "__system", key: "4801c75cb6f66a29d7c41b796ce01dd2" }
m31200| Thu Jun 14 01:26:58 [conn31] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759801444143344) } } cursorid:4387162329158333692 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:7044 nreturned:1 reslen:177 5840ms
m31202| Thu Jun 14 01:26:58 [FileAllocator] allocating new datafile /data/db/d2-2/admin.ns, filling with zeroes...
m31102| Thu Jun 14 01:26:58 [FileAllocator] done allocating datafile /data/db/d1-2/admin.0, size: 16MB, took 0.882 secs
m31102| Thu Jun 14 01:26:58 [rsSync] build index admin.system.users { _id: 1 }
m31102| Thu Jun 14 01:26:58 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:58 [FileAllocator] done allocating datafile /data/db/d1-1/admin.0, size: 16MB, took 0.896 secs
m31101| Thu Jun 14 01:26:58 [rsSync] build index admin.system.users { _id: 1 }
m31101| Thu Jun 14 01:26:58 [rsSync] build index done. scanned 0 total records. 0 secs
{ "dbname" : "test", "user" : "bar", "readOnly" : false, "ok" : 1 }
testing map reduce
m31201| Thu Jun 14 01:26:58 [FileAllocator] allocating new datafile /data/db/d2-1/admin.ns, filling with zeroes...
m31202| Thu Jun 14 01:26:59 [FileAllocator] done allocating datafile /data/db/d2-2/admin.ns, size: 16MB, took 0.767 secs
m31202| Thu Jun 14 01:26:59 [FileAllocator] allocating new datafile /data/db/d2-2/admin.0, filling with zeroes...
m31200| Thu Jun 14 01:26:59 [conn17] CMD: drop test.tmp.mr.foo_0_inc
m31200| Thu Jun 14 01:26:59 [conn17] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31200| Thu Jun 14 01:26:59 [conn17] build index done. scanned 0 total records. 0 secs
m31200| Thu Jun 14 01:26:59 [conn17] CMD: drop test.tmp.mr.foo_0
m31200| Thu Jun 14 01:26:59 [conn17] build index test.tmp.mr.foo_0 { _id: 1 }
m31200| Thu Jun 14 01:26:59 [conn17] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:26:59 [conn9] CMD: drop test.tmp.mr.foo_0_inc
m31100| Thu Jun 14 01:26:59 [conn9] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31100| Thu Jun 14 01:26:59 [conn9] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:26:59 [conn9] CMD: drop test.tmp.mr.foo_0
m31100| Thu Jun 14 01:26:59 [conn9] build index test.tmp.mr.foo_0 { _id: 1 }
m31101| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31101| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:59 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31101| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31101| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31101| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31102| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:26:59 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31102| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31102| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31102| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:26:59 [conn9] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:26:59 [FileAllocator] done allocating datafile /data/db/d2-1/admin.ns, size: 16MB, took 0.539 secs
m31201| Thu Jun 14 01:26:59 [FileAllocator] allocating new datafile /data/db/d2-1/admin.0, filling with zeroes...
m29000| Thu Jun 14 01:26:59 [initandlisten] connection accepted from 10.255.119.66:51955 #17 (15 connections now open)
m29000| Thu Jun 14 01:26:59 [conn17] authenticate db: local { authenticate: 1, nonce: "189149918f35893", user: "__system", key: "7289676d93fb0adc010806b9e5d7e39b" }
m30999| Thu Jun 14 01:26:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd976232fbdcaaf7b2c0732
m31202| Thu Jun 14 01:26:59 [FileAllocator] done allocating datafile /data/db/d2-2/admin.0, size: 16MB, took 0.591 secs
m31202| Thu Jun 14 01:26:59 [rsSync] build index admin.system.users { _id: 1 }
m31202| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31202| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Thu Jun 14 01:26:59 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31202| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31202| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Thu Jun 14 01:26:59 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31202| Thu Jun 14 01:26:59 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:27:00 [FileAllocator] done allocating datafile /data/db/d2-1/admin.0, size: 16MB, took 0.659 secs
m31201| Thu Jun 14 01:27:00 [rsSync] build index admin.system.users { _id: 1 }
m31201| Thu Jun 14 01:27:00 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:27:00 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31201| Thu Jun 14 01:27:00 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:27:00 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31201| Thu Jun 14 01:27:00 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31201| Thu Jun 14 01:27:00 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Thu Jun 14 01:27:00 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31201| Thu Jun 14 01:27:00 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:27:00 [conn13] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:65 reslen:2046 619ms
m30999| Thu Jun 14 01:27:00 [Balancer] ---- ShardInfoMap
m31100| Thu Jun 14 01:27:00 [conn13] received moveChunk request: { moveChunk: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", to: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_MinKey", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:27:00 [conn13] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:27:00 [conn13] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd976240bf7ca455ce4e481
m31100| Thu Jun 14 01:27:00 [conn13] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:00-12", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54758", time: new Date(1339651620752), what: "moveChunk.start", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, from: "d1", to: "d2" } }
m31100| Thu Jun 14 01:27:00 [conn13] moveChunk request accepted at version 3|11||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:27:00 [conn13] moveChunk number of documents: 0
m30999| Thu Jun 14 01:27:00 [Balancer] d1 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:00 [Balancer] d2 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:00 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:27:00 [Balancer] d1
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_40449.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 40449.0 }, max: { x: 51379.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_51379.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 51379.0 }, max: { x: 62468.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_62468.0", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 62468.0 }, max: { x: 73489.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_73489.0", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 73489.0 }, max: { x: 84666.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_84666.0", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 84666.0 }, max: { x: 96830.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_96830.0", lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 96830.0 }, max: { x: MaxKey }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] d2
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 17642.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_17642.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 17642.0 }, max: { x: 28772.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:00 [Balancer] { _id: "test.foo-x_28772.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 28772.0 }, max: { x: 40449.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:00 [Balancer] ----
m30999| Thu Jun 14 01:27:00 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:00 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|1||000000000000000000000000 min: { x: MinKey } max: { x: 0.0 }) d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 -> d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31101| Thu Jun 14 01:27:00 [conn14] end connection 10.255.119.66:46152 (11 connections now open)
m31101| Thu Jun 14 01:27:00 [initandlisten] connection accepted from 10.255.119.66:46188 #17 (12 connections now open)
m31101| Thu Jun 14 01:27:00 [conn17] authenticate db: local { authenticate: 1, nonce: "a69fca0bd95ae707", user: "__system", key: "3ab6bb5ef010432f80ce2eadbaae894a" }
m31200| Thu Jun 14 01:27:01 [conn34] getmore local.oplog.rs query: { ts: { $gte: new Date(0) } } cursorid:4547028274705151353 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:909812 nreturned:5 reslen:105 1473ms
m31100| Thu Jun 14 01:27:01 [conn13] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: MinKey }, max: { x: 0.0 }, shardKeyPattern: { x: 1 }, state: "catchup", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Thu Jun 14 01:27:02 [conn9] 48000/65401 73%
m31200| Thu Jun 14 01:27:02 [conn29] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759805739107884) } } cursorid:6856341299666432453 ntoreturn:0 keyUpdates:0 numYields: 8 locks(micros) r:41117 nreturned:9014 reslen:153258 101ms
m31100| Thu Jun 14 01:27:02 [conn13] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: MinKey }, max: { x: 0.0 }, shardKeyPattern: { x: 1 }, state: "catchup", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Thu Jun 14 01:27:02 [conn38] end connection 10.255.119.66:54836 (20 connections now open)
m31100| Thu Jun 14 01:27:02 [initandlisten] connection accepted from 10.255.119.66:54870 #40 (21 connections now open)
m31100| Thu Jun 14 01:27:02 [conn40] authenticate db: local { authenticate: 1, nonce: "3e8e83397b76f61", user: "__system", key: "36cebfe4afc95b39c87be9c9ad8aa8a2" }
m31100| Thu Jun 14 01:27:03 [conn39] end connection 10.255.119.66:54838 (20 connections now open)
m31100| Thu Jun 14 01:27:03 [initandlisten] connection accepted from 10.255.119.66:54871 #41 (21 connections now open)
m31100| Thu Jun 14 01:27:03 [conn41] authenticate db: local { authenticate: 1, nonce: "c08b422fadd13f1b", user: "__system", key: "d5a74226013cde9b398e2608c021db74" }
m31201| Thu Jun 14 01:27:03 [conn11] end connection 10.255.119.66:47335 (10 connections now open)
m31201| Thu Jun 14 01:27:03 [initandlisten] connection accepted from 10.255.119.66:47370 #14 (11 connections now open)
m31201| Thu Jun 14 01:27:03 [conn14] authenticate db: local { authenticate: 1, nonce: "ab5a96a466ff01a3", user: "__system", key: "482549d8f2598f060f3e83cc66dcb0d9" }
m31200| Thu Jun 14 01:27:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: MinKey } -> { x: 0.0 }
m31100| Thu Jun 14 01:27:03 [conn13] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: MinKey }, max: { x: 0.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Thu Jun 14 01:27:03 [conn13] moveChunk setting version to: 4|0||4fd975e444bfbb7b7d568221
m31100| Thu Jun 14 01:27:03 [conn13] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: MinKey }, max: { x: 0.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Thu Jun 14 01:27:03 [conn13] moveChunk updating self version to: 4|1||4fd975e444bfbb7b7d568221 through { x: 0.0 } -> { x: 5850.0 } for collection 'test.foo'
m31100| Thu Jun 14 01:27:03 [conn13] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:03-13", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54758", time: new Date(1339651623846), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, from: "d1", to: "d2" } }
m31200| Thu Jun 14 01:27:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: MinKey } -> { x: 0.0 }
m31200| Thu Jun 14 01:27:03 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:03-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651623829), what: "moveChunk.to", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, step1 of 5: 403, step2 of 5: 0, step3 of 5: 1, step4 of 5: 0, step5 of 5: 2611 } }
m31100| Thu Jun 14 01:27:03 [conn13] forking for cleaning up chunk data
m31100| Thu Jun 14 01:27:03 [conn13] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31100| Thu Jun 14 01:27:03 [conn13] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:03-14", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54758", time: new Date(1339651623848), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 7, step4 of 6: 3049, step5 of 6: 37, step6 of 6: 0 } }
m31100| Thu Jun 14 01:27:03 [conn13] command admin.$cmd command: { moveChunk: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", to: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_MinKey", configdb: "domU-12-31-39-01-70-B4:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:132 w:22 reslen:37 3096ms
m31100| Thu Jun 14 01:27:03 [cleanupOldData] (start) waiting to cleanup test.foo from { x: MinKey } -> { x: 0.0 } # cursors:1
m31100| Thu Jun 14 01:27:03 [cleanupOldData] (looping 1) waiting to cleanup test.foo from { x: MinKey } -> { x: 0.0 } # cursors:1
m31100| Thu Jun 14 01:27:03 [cleanupOldData] cursors: 5473891331102570922
m31200| Thu Jun 14 01:27:04 [conn17] 14900/34599 43%
m30999| Thu Jun 14 01:27:04 [Balancer] ChunkManager: time to load chunks for test.foo: 123ms sequenceNumber: 3 version: 4|1||4fd975e444bfbb7b7d568221 based on: 1|2||4fd975e444bfbb7b7d568221
m30999| Thu Jun 14 01:27:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
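The [Balancer] block above is one balancing round on the 30999 mongos: it dumps the ShardInfoMap and ShardToChunksMap, picks the { x: MinKey } -> { x: 0.0 } chunk on d1, and drives a moveChunk to d2 through the d1 primary (the conn13 lines), finishing with a chunk-metadata reload to version 4|1. A hedged sketch of requesting the same migration manually through a mongos (authentication omitted):

    // Hedged sketch: manually migrating the MinKey chunk from d1 to d2 via mongos,
    // which issues a moveChunk to the donor primary much like the balancer did above.
    var adminDB = connect("domU-12-31-39-01-70-B4:30999/admin");
    printjson(adminDB.runCommand({
        moveChunk: "test.foo",
        find: { x: MinKey },   // any key falling inside the chunk to move
        to: "d2"
    }));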
m31200| Thu Jun 14 01:27:05 [conn18] end connection 10.255.119.66:34952 (19 connections now open)
m31200| Thu Jun 14 01:27:05 [initandlisten] connection accepted from 10.255.119.66:34985 #36 (20 connections now open)
m31200| Thu Jun 14 01:27:05 [conn36] authenticate db: local { authenticate: 1, nonce: "3f95d32bd824a4bd", user: "__system", key: "a77405bc3f35cea026ce10803706327f" }
m31200| Thu Jun 14 01:27:05 [conn19] end connection 10.255.119.66:34954 (19 connections now open)
m31200| Thu Jun 14 01:27:05 [initandlisten] connection accepted from 10.255.119.66:34986 #37 (20 connections now open)
m31200| Thu Jun 14 01:27:05 [conn37] authenticate db: local { authenticate: 1, nonce: "8c75ce0af6b9f91", user: "__system", key: "26684e3960bf909a9123952f786daa29" }
m31100| Thu Jun 14 01:27:06 [conn9] 18000/65401 27%
m31200| Thu Jun 14 01:27:06 [conn17] CMD: drop test.tmp.mrs.foo_1339651619_0
m31200| Thu Jun 14 01:27:07 [conn17] CMD: drop test.tmp.mr.foo_0
m31200| Thu Jun 14 01:27:07 [conn17] CMD: drop test.tmp.mr.foo_0
m31200| Thu Jun 14 01:27:07 [conn17] CMD: drop test.tmp.mr.foo_0_inc
m31201| Thu Jun 14 01:27:07 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31200| Thu Jun 14 01:27:07 [conn17] command test.$cmd command: { mapreduce: "foo", map: function () {
m31200| emit(this.x, 1);
m31200| }, reduce: function (key, values) {
m31200| return values.length;
m31200| }, out: "tmp.mrs.foo_1339651619_0", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 34945 locks(micros) W:101036 r:2162077 w:7822766 reslen:148 7807ms
m31202| Thu Jun 14 01:27:07 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31100| Thu Jun 14 01:27:07 [conn27] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:2976538272780824479 ntoreturn:0 keyUpdates:0 numYields: 21 locks(micros) r:839205 nreturned:4117 reslen:411720 127ms
m31100| Thu Jun 14 01:27:08 [conn29] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:8356589039904927899 ntoreturn:0 keyUpdates:0 numYields: 12 locks(micros) r:649835 nreturned:1885 reslen:188520 133ms
m31100| Thu Jun 14 01:27:08 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 12 locks(micros) r:547454 nreturned:3980 reslen:67680 135ms
m31100| Thu Jun 14 01:27:09 [conn9] 44700/65401 68%
m31100| Thu Jun 14 01:27:09 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 11 locks(micros) r:485268 nreturned:1991 reslen:33867 114ms
m30999| Thu Jun 14 01:27:09 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd9762d2fbdcaaf7b2c0733
m30999| Thu Jun 14 01:27:09 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:27:09 [Balancer] d1 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:09 [Balancer] d2 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:09 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:27:09 [Balancer] d1
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_40449.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 40449.0 }, max: { x: 51379.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_51379.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 51379.0 }, max: { x: 62468.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_62468.0", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 62468.0 }, max: { x: 73489.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_73489.0", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 73489.0 }, max: { x: 84666.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_84666.0", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 84666.0 }, max: { x: 96830.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_96830.0", lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 96830.0 }, max: { x: MaxKey }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] d2
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 17642.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_17642.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 17642.0 }, max: { x: 28772.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:09 [Balancer] { _id: "test.foo-x_28772.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 28772.0 }, max: { x: 40449.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:09 [Balancer] ----
m30999| Thu Jun 14 01:27:09 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_0.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:09 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 4|1||000000000000000000000000 min: { x: 0.0 } max: { x: 5850.0 }) d1:d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 -> d2:d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202
m31100| Thu Jun 14 01:27:09 [conn13] received moveChunk request: { moveChunk: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", to: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", fromShard: "d1", toShard: "d2", min: { x: 0.0 }, max: { x: 5850.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_0.0", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:27:09 [conn13] created new distributed lock for test.foo on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31202| Thu Jun 14 01:27:09 [clientcursormon] mem (MB) res:78 virt:391 mapped:192
m31100| Thu Jun 14 01:27:09 [conn13] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' acquired, ts : 4fd9762d0bf7ca455ce4e482
m31100| Thu Jun 14 01:27:09 [conn13] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:09-15", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54758", time: new Date(1339651629242), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 5850.0 }, from: "d1", to: "d2" } }
m31100| Thu Jun 14 01:27:09 [conn13] moveChunk request accepted at version 4|1||4fd975e444bfbb7b7d568221
m31201| Thu Jun 14 01:27:09 [clientcursormon] mem (MB) res:79 virt:392 mapped:192
m31200| Thu Jun 14 01:27:09 [clientcursormon] mem (MB) res:115 virt:455 mapped:176
m31100| Thu Jun 14 01:27:09 [conn13] moveChunk number of documents: 5850
m31100| Thu Jun 14 01:27:09 [cleanupOldData] (looping 201) waiting to cleanup test.foo from { x: MinKey } -> { x: 0.0 } # cursors:1
m31100| Thu Jun 14 01:27:09 [cleanupOldData] cursors: 5473891331102570922
m31100| Thu Jun 14 01:27:10 [conn13] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: 0.0 }, max: { x: 5850.0 }, shardKeyPattern: { x: 1 }, state: "clone", counts: { cloned: 5850, clonedBytes: 567450, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m29000| Thu Jun 14 01:27:10 [initandlisten] connection accepted from 10.255.119.66:51962 #18 (16 connections now open)
m29000| Thu Jun 14 01:27:10 [conn18] authenticate db: local { authenticate: 1, nonce: "654642cd80496fd", user: "__system", key: "ca658b6fcf48c7cb260dc6f10c0ab6d3" }
m29000| Thu Jun 14 01:27:10 [initandlisten] connection accepted from 10.255.119.66:51963 #19 (17 connections now open)
m29000| Thu Jun 14 01:27:10 [conn19] authenticate db: local { authenticate: 1, nonce: "7939b4c512c8d96d", user: "__system", key: "20a31c78b289e9505529d9767419b39b" }
m31200| Thu Jun 14 01:27:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 0.0 } -> { x: 5850.0 }
m31100| Thu Jun 14 01:27:11 [conn13] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: 0.0 }, max: { x: 5850.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 5850, clonedBytes: 567450, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Thu Jun 14 01:27:11 [conn13] moveChunk setting version to: 5|0||4fd975e444bfbb7b7d568221
m31200| Thu Jun 14 01:27:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 0.0 } -> { x: 5850.0 }
m31200| Thu Jun 14 01:27:11 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:11-9", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651631330), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 5850.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1043, step4 of 5: 0, step5 of 5: 1021 } }
m31100| Thu Jun 14 01:27:11 [conn13] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", min: { x: 0.0 }, max: { x: 5850.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 5850, clonedBytes: 567450, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Thu Jun 14 01:27:11 [conn13] moveChunk updating self version to: 5|1||4fd975e444bfbb7b7d568221 through { x: 40449.0 } -> { x: 51379.0 } for collection 'test.foo'
m31100| Thu Jun 14 01:27:11 [conn13] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:11-16", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54758", time: new Date(1339651631340), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 5850.0 }, from: "d1", to: "d2" } }
m31100| Thu Jun 14 01:27:11 [conn13] forking for cleaning up chunk data
m31100| Thu Jun 14 01:27:11 [conn13] distributed lock 'test.foo/domU-12-31-39-01-70-B4:31100:1339651588:969901886' unlocked.
m31100| Thu Jun 14 01:27:11 [conn13] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:11-17", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:54758", time: new Date(1339651631341), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 5850.0 }, step1 of 6: 0, step2 of 6: 9, step3 of 6: 20, step4 of 6: 2047, step5 of 6: 28, step6 of 6: 0 } }
m31100| Thu Jun 14 01:27:11 [conn13] command admin.$cmd command: { moveChunk: "test.foo", from: "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", to: "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202", fromShard: "d1", toShard: "d2", min: { x: 0.0 }, max: { x: 5850.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_0.0", configdb: "domU-12-31-39-01-70-B4:29000" } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) r:10083 w:44 reslen:37 2107ms
m31100| Thu Jun 14 01:27:11 [cleanupOldData] (start) waiting to cleanup test.foo from { x: 0.0 } -> { x: 5850.0 } # cursors:1
m30999| Thu Jun 14 01:27:11 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 5|1||4fd975e444bfbb7b7d568221 based on: 4|1||4fd975e444bfbb7b7d568221
m30999| Thu Jun 14 01:27:11 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31100| Thu Jun 14 01:27:11 [cleanupOldData] (looping 1) waiting to cleanup test.foo from { x: 0.0 } -> { x: 5850.0 } # cursors:1
m31100| Thu Jun 14 01:27:11 [cleanupOldData] cursors: 5473891331102570922
m31100| Thu Jun 14 01:27:11 [conn9] CMD: drop test.tmp.mrs.foo_1339651619_0
m31100| Thu Jun 14 01:27:11 [conn9] CMD: drop test.tmp.mr.foo_0
m31100| Thu Jun 14 01:27:11 [conn9] CMD: drop test.tmp.mr.foo_0
m31100| Thu Jun 14 01:27:11 [conn9] CMD: drop test.tmp.mr.foo_0_inc
m31100| Thu Jun 14 01:27:11 [conn9] command test.$cmd command: { mapreduce: "foo", map: function () {
m31100| emit(this.x, 1);
m31100| }, reduce: function (key, values) {
m31100| return values.length;
m31100| }, out: "tmp.mrs.foo_1339651619_0", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 66056 locks(micros) W:2521 r:3621536 w:15435141 reslen:148 12557ms
m31100| Thu Jun 14 01:27:11 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Thu Jun 14 01:27:11 [conn9] build index test.tmp.mr.foo_1 { _id: 1 }
m31100| Thu Jun 14 01:27:11 [conn9] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:27:11 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31100| Thu Jun 14 01:27:11 [cleanupOldData] moveChunk deleted: 0
m31102| Thu Jun 14 01:27:11 [rsSync] build index test.tmp.mr.foo_1 { _id: 1 }
m31102| Thu Jun 14 01:27:11 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:27:11 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31101| Thu Jun 14 01:27:11 [rsSync] build index test.tmp.mr.foo_1 { _id: 1 }
m31101| Thu Jun 14 01:27:11 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:27:11 [conn9] ChunkManager: time to load chunks for test.foo: 12ms sequenceNumber: 2 version: 5|1||4fd975e444bfbb7b7d568221 based on: (empty)
m31100| Thu Jun 14 01:27:11 [conn9] starting new replica set monitor for replica set d1 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:54877 #42 (22 connections now open)
m31100| Thu Jun 14 01:27:11 [conn9] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set d1
m31100| Thu Jun 14 01:27:11 [conn9] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from d1/
m31100| Thu Jun 14 01:27:11 [conn9] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set d1
m31100| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:54878 #43 (23 connections now open)
m31100| Thu Jun 14 01:27:11 [conn9] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set d1
m31100| Thu Jun 14 01:27:11 [conn9] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set d1
m31100| Thu Jun 14 01:27:11 [conn9] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set d1
m31100| Thu Jun 14 01:27:11 [conn9] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set d1
m31101| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:46198 #18 (13 connections now open)
m31102| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:54821 #16 (12 connections now open)
m31100| Thu Jun 14 01:27:11 [conn9] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set d1
m31100| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:54881 #44 (24 connections now open)
m31100| Thu Jun 14 01:27:11 [conn44] authenticate db: local { authenticate: 1, nonce: "fd0aeb10cbc72f3e", user: "__system", key: "95da9b4a972196496b548e58f709de5d" }
m31100| Thu Jun 14 01:27:11 [conn42] end connection 10.255.119.66:54877 (23 connections now open)
m31100| Thu Jun 14 01:27:11 [conn9] Primary for replica set d1 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:46201 #19 (14 connections now open)
m31101| Thu Jun 14 01:27:11 [conn19] authenticate db: local { authenticate: 1, nonce: "c4755845db6fedf3", user: "__system", key: "2bd8ce259c52bdfb777b110a6df15b69" }
m31102| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:54824 #17 (13 connections now open)
m31102| Thu Jun 14 01:27:11 [conn17] authenticate db: local { authenticate: 1, nonce: "1e625d38fa177419", user: "__system", key: "516155f0f4ff63471cd760272e1e8861" }
m31100| Thu Jun 14 01:27:11 [conn9] replica set monitor for replica set d1 started, address is d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:54884 #45 (24 connections now open)
m31100| Thu Jun 14 01:27:11 [conn45] authenticate db: local { authenticate: 1, nonce: "f423bd64e3316c50", user: "__system", key: "2c3ea0a348b178f8295a7cd06909d1e0" }
m31200| Thu Jun 14 01:27:11 [initandlisten] connection accepted from 10.255.119.66:34997 #38 (21 connections now open)
m31200| Thu Jun 14 01:27:11 [conn38] authenticate db: local { authenticate: 1, nonce: "377fc664b437d76f", user: "__system", key: "79632def20e3ece43a540c79e1ad8c24" }
m31100| Thu Jun 14 01:27:12 [conn44] getmore test.tmp.mrs.foo_1339651619_0 query: { query: {}, orderby: { _id: 1 } } cursorid:6606850046760954709 ntoreturn:0 keyUpdates:0 numYields: 25 locks(micros) r:186358 nreturned:65300 reslen:2154920 362ms
m31100| Thu Jun 14 01:27:12 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 18 locks(micros) r:615591 nreturned:3364 reslen:57208 149ms
m31100| Thu Jun 14 01:27:12 [conn27] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:2976538272780824479 ntoreturn:0 keyUpdates:0 numYields: 14 locks(micros) r:1134035 nreturned:3629 reslen:358576 155ms
m31100| Thu Jun 14 01:27:13 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 19 locks(micros) r:626846 nreturned:3981 reslen:67697 328ms
m31100| Thu Jun 14 01:27:13 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 10 locks(micros) r:686753 nreturned:3166 reslen:53842 104ms
m31100| Thu Jun 14 01:27:13 [conn29] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:8356589039904927899 ntoreturn:0 keyUpdates:0 numYields: 17 locks(micros) r:1090411 nreturned:2051 reslen:198070 191ms
m31100| Thu Jun 14 01:27:13 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 14 locks(micros) r:698181 nreturned:2200 reslen:37420 205ms
m31100| Thu Jun 14 01:27:13 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 21 locks(micros) r:765951 nreturned:2599 reslen:44203 269ms
m31100| Thu Jun 14 01:27:13 [cleanupOldData] moveChunk deleted: 5850
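"moveChunk deleted: 5850" marks the end of the donor-side cleanup for the { x: 0.0 } -> { x: 5850.0 } migration: the cleanupOldData thread waited for the open cursor (the "cursors: 5473891331102570922" lines) to go away and then removed the 5850 migrated documents from d1. A hedged sketch of checking the outcome with direct, authenticated connections to the two shard primaries (counts taken on the shards bypass mongos routing, so they show the physical copies):

    // Hedged sketch: counting the migrated range on each shard primary after cleanup.
    var d1test = connect("domU-12-31-39-01-70-B4:31100/test");
    var d2test = connect("domU-12-31-39-01-70-B4:31200/test");
    print("d1 copies: " + d1test.foo.count({ x: { $gte: 0, $lt: 5850 } }));  // expect 0
    print("d2 copies: " + d2test.foo.count({ x: { $gte: 0, $lt: 5850 } }));  // expect 5850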
m31100| Thu Jun 14 01:27:14 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 3 locks(micros) r:760180 nreturned:5390 reslen:91650 104ms
m31100| Thu Jun 14 01:27:14 [conn27] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:2976538272780824479 ntoreturn:0 keyUpdates:0 numYields: 3 locks(micros) r:1278813 nreturned:3081 reslen:308120 108ms
m31102| Thu Jun 14 01:27:14 [conn14] end connection 10.255.119.66:54802 (12 connections now open)
m31102| Thu Jun 14 01:27:15 [initandlisten] connection accepted from 10.255.119.66:54827 #18 (13 connections now open)
m31102| Thu Jun 14 01:27:15 [conn18] authenticate db: local { authenticate: 1, nonce: "1e7daeacc7dfca66", user: "__system", key: "fced63ca02ef3844b37c0f40479c22e2" }
m31100| Thu Jun 14 01:27:15 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 numYields: 15 locks(micros) r:889323 nreturned:7190 reslen:122250 256ms
m30999| Thu Jun 14 01:27:16 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' acquired, ts : 4fd976342fbdcaaf7b2c0734
m30999| Thu Jun 14 01:27:16 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:27:16 [Balancer] d1 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:16 [Balancer] d2 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:16 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:27:16 [Balancer] d1
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_40449.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 40449.0 }, max: { x: 51379.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_51379.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 51379.0 }, max: { x: 62468.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_62468.0", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 62468.0 }, max: { x: 73489.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_73489.0", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 73489.0 }, max: { x: 84666.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_84666.0", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 84666.0 }, max: { x: 96830.0 }, shard: "d1" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_96830.0", lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 96830.0 }, max: { x: MaxKey }, shard: "d1" }
m30999| Thu Jun 14 01:27:16 [Balancer] d2
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 17642.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_17642.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 17642.0 }, max: { x: 28772.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:16 [Balancer] { _id: "test.foo-x_28772.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd975e444bfbb7b7d568221'), ns: "test.foo", min: { x: 28772.0 }, max: { x: 40449.0 }, shard: "d2" }
m30999| Thu Jun 14 01:27:16 [Balancer] ----
m30999| Thu Jun 14 01:27:16 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651509:1804289383' unlocked.
m31100| Thu Jun 14 01:27:16 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 4 locks(micros) r:874638 nreturned:2922 reslen:49694 110ms
m31100| Thu Jun 14 01:27:16 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 locks(micros) r:1039010 nreturned:6975 reslen:118595 164ms
m31100| Thu Jun 14 01:27:16 [conn30] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:4590850208278294899 ntoreturn:0 keyUpdates:0 locks(micros) r:1088813 nreturned:9337 reslen:158749 193ms
m31102| Thu Jun 14 01:27:16 [conn15] end connection 10.255.119.66:54803 (12 connections now open)
m31102| Thu Jun 14 01:27:16 [initandlisten] connection accepted from 10.255.119.66:54828 #19 (13 connections now open)
m31102| Thu Jun 14 01:27:16 [conn19] authenticate db: local { authenticate: 1, nonce: "917cc993331bfc6a", user: "__system", key: "1ee1ab89557e57dcd6938850d78b5f6f" }
m31202| Thu Jun 14 01:27:17 [conn12] end connection 10.255.119.66:58109 (10 connections now open)
m31202| Thu Jun 14 01:27:17 [initandlisten] connection accepted from 10.255.119.66:58133 #14 (11 connections now open)
m31202| Thu Jun 14 01:27:17 [conn14] authenticate db: local { authenticate: 1, nonce: "b2dd950231b83139", user: "__system", key: "c2d9320b9bc3792a63a35893b31cf834" }
m31101| Thu Jun 14 01:27:17 [conn15] end connection 10.255.119.66:46182 (13 connections now open)
m31101| Thu Jun 14 01:27:17 [initandlisten] connection accepted from 10.255.119.66:46208 #20 (14 connections now open)
m31101| Thu Jun 14 01:27:17 [conn20] authenticate db: local { authenticate: 1, nonce: "81bf6191e7421d22", user: "__system", key: "aceec8439b9f7c7d86c20fc1f333a991" }
m31100| Thu Jun 14 01:27:18 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 3 locks(micros) r:1103031 nreturned:4173 reslen:70961 178ms
m31202| Thu Jun 14 01:27:19 [conn13] end connection 10.255.119.66:58110 (10 connections now open)
m31202| Thu Jun 14 01:27:19 [initandlisten] connection accepted from 10.255.119.66:58135 #15 (11 connections now open)
m31202| Thu Jun 14 01:27:19 [conn15] authenticate db: local { authenticate: 1, nonce: "431090006dd61684", user: "__system", key: "700674c908913804839868b590e3aab5" }
m31201| Thu Jun 14 01:27:20 [conn13] end connection 10.255.119.66:47364 (10 connections now open)
m31201| Thu Jun 14 01:27:20 [initandlisten] connection accepted from 10.255.119.66:47389 #15 (11 connections now open)
m31201| Thu Jun 14 01:27:20 [conn15] authenticate db: local { authenticate: 1, nonce: "e1b7f42848368970", user: "__system", key: "5ad646bdca7e9b7945fff468884caa8e" }
m31100| Thu Jun 14 01:27:21 [conn9] CMD: drop test.mrout
m31100| Thu Jun 14 01:27:21 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Thu Jun 14 01:27:21 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Thu Jun 14 01:27:21 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Thu Jun 14 01:27:21 [conn9] command test.$cmd command: { mapreduce.shardedfinish: { mapreduce: "foo", map: function () {
m31100| emit(this.x, 1);
m31100| }, reduce: function (key, values) {
m31100| return values.length;
m31100| }, out: "mrout" }, inputDB: "test", shardedOutputCollection: "tmp.mrs.foo_1339651619_0", shards: { d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102: { result: "tmp.mrs.foo_1339651619_0", timeMillis: 12550, counts: { input: 65401, emit: 65401, reduce: 0, output: 65401 }, ok: 1.0 }, d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202: { result: "tmp.mrs.foo_1339651619_0", timeMillis: 7805, counts: { input: 34599, emit: 34599, reduce: 0, output: 34599 }, ok: 1.0 } }, shardCounts: { d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102: { input: 65401, emit: 65401, reduce: 0, output: 65401 }, d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202: { input: 34599, emit: 34599, reduce: 0, output: 34599 } }, counts: { emit: 100000, input: 100000, output: 100000, reduce: 0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:4506 r:3621550 w:23977312 reslen:150 9226ms
m31100| Thu Jun 14 01:27:21 [conn28] getmore local.oplog.rs query: { ts: { $gte: new Date(5753759621055512577) } } cursorid:1751418011445932127 ntoreturn:0 keyUpdates:0 numYields: 32 locks(micros) r:1188437 nreturned:7195 reslen:122335 303ms
m31100| Thu Jun 14 01:27:21 [conn37] CMD: drop test.tmp.mrs.foo_1339651619_0
m31101| Thu Jun 14 01:27:21 [rsSync] CMD: drop test.tmp.mrs.foo_1339651619_0
m31200| Thu Jun 14 01:27:21 [conn28] CMD: drop test.tmp.mrs.foo_1339651619_0
{
    "result" : "mrout",
    "counts" : {
        "input" : NumberLong(100000),
        "emit" : NumberLong(100000),
        "reduce" : NumberLong(0),
        "output" : NumberLong(100000)
    },
    "timeMillis" : 21941,
    "timing" : {
        "shardProcessing" : 12588,
        "postProcessing" : 9352
    },
    "shardCounts" : {
        "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" : {
            "input" : 65401,
            "emit" : 65401,
            "reduce" : 0,
            "output" : 65401
        },
        "d2/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201,domU-12-31-39-01-70-B4:31202" : {
            "input" : 34599,
            "emit" : 34599,
            "reduce" : 0,
            "output" : 34599
        }
    },
    "postProcessCounts" : {
        "d1/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" : {
            "input" : NumberLong(100000),
            "reduce" : NumberLong(0),
            "output" : NumberLong(100000)
        }
    },
    "ok" : 1
}
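This document is the client-visible result of the sharded mapReduce announced at "testing map reduce": each shard first runs the map/reduce into a temporary tmp.mrs.foo_1339651619_0 collection (65401 documents on d1, 34599 on d2, as shardCounts shows), then the mapreduce.shardedfinish command on d1 merges the pieces into test.mrout and the temporaries are dropped. A hedged sketch of the shell call that yields a result of this shape, with the map and reduce bodies copied from the command lines in the log:

    // Hedged sketch: the mapReduce invocation traced above, run through mongos.
    var res = db.getSiblingDB("test").foo.mapReduce(
        function() { emit(this.x, 1); },
        function(key, values) { return values.length; },
        { out: "mrout" }
    );
    printjson(res);                           // a document like the one above
    assert.eq(100000, res.counts.output);     // one output row per distinct x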
m31201| Thu Jun 14 01:27:21 [rsSync] CMD: drop test.tmp.mrs.foo_1339651619_0
m31202| Thu Jun 14 01:27:21 [rsSync] CMD: drop test.tmp.mrs.foo_1339651619_0
Thu Jun 14 01:27:21 shell: started program /mnt/slaves/Linux_32bit/mongo/mongodump --host 127.0.0.1:31000 -d test -u bar -p baz
m31102| Thu Jun 14 01:27:21 [rsSync] CMD: drop test.tmp.mrs.foo_1339651619_0
m31000| Thu Jun 14 01:27:21 [mongosMain] connection accepted from 127.0.0.1:46372 #2 (2 connections now open)
sh22127| connected to: 127.0.0.1:31000
m31000| Thu Jun 14 01:27:21 [conn] authenticate db: test { authenticate: 1, nonce: "65629743bbd309d0", user: "bar", key: "4d543ca321b1c74ef3a209ceb05db1f9" }
sh22127| Thu Jun 14 01:27:22 DATABASE: test to dump/test
m31100| Thu Jun 14 01:27:22 [initandlisten] connection accepted from 10.255.119.66:54893 #46 (25 connections now open)
m31100| Thu Jun 14 01:27:22 [conn46] authenticate db: local { authenticate: 1, nonce: "1d363ed1e6325822", user: "__system", key: "7261d6234d94e26573c40319b9be48b4" }
m31200| Thu Jun 14 01:27:22 [initandlisten] connection accepted from 10.255.119.66:35006 #39 (22 connections now open)
m31200| Thu Jun 14 01:27:22 [conn39] authenticate db: local { authenticate: 1, nonce: "bc093cd878a85b66", user: "__system", key: "db427ee5cec1fef618aacdb9a8c5f974" }
m31102| Thu Jun 14 01:27:22 [initandlisten] connection accepted from 10.255.119.66:54836 #20 (14 connections now open)
m31102| Thu Jun 14 01:27:22 [conn20] authenticate db: local { authenticate: 1, nonce: "3f741eeda6eaa8fd", user: "__system", key: "7677792f9881e38163747b2114686f75" }
sh22127| Thu Jun 14 01:27:22 test.foo to dump/test/foo.bson
m31000| Thu Jun 14 01:27:22 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 5|1||4fd975e444bfbb7b7d568221 based on: 3|11||4fd975e444bfbb7b7d568221
m31201| Thu Jun 14 01:27:22 [initandlisten] connection accepted from 10.255.119.66:47394 #16 (12 connections now open)
m31201| Thu Jun 14 01:27:22 [conn16] authenticate db: local { authenticate: 1, nonce: "79fba96878324e04", user: "__system", key: "883fc450dece72096c0c36aa751e4057" }
m31201| Thu Jun 14 01:27:22 [conn12] getmore test.foo query: { query: {}, $snapshot: true } cursorid:6570113967412597888 ntoreturn:0 keyUpdates:0 locks(micros) r:157204 nreturned:40348 reslen:3913776 157ms
m31102| Thu Jun 14 01:27:22 [conn9] getmore test.foo query: { query: {}, $snapshot: true } cursorid:496128121707549823 ntoreturn:0 keyUpdates:0 locks(micros) r:103986 nreturned:43240 reslen:4194300 103ms
sh22127| Thu Jun 14 01:27:23 100000 objects
sh22127| Thu Jun 14 01:27:23 Metadata for test.foo to dump/test/foo.metadata.json
sh22127| Thu Jun 14 01:27:23 test.system.users to dump/test/system.users.bson
sh22127| Thu Jun 14 01:27:23 2 objects
sh22127| Thu Jun 14 01:27:23 Metadata for test.system.users to dump/test/system.users.metadata.json
sh22127| Thu Jun 14 01:27:23 test.mrout to dump/test/mrout.bson
m31102| Thu Jun 14 01:27:23 [conn9] getmore test.mrout query: { query: {}, $snapshot: true } cursorid:1745837500308772638 ntoreturn:0 keyUpdates:0 locks(micros) r:434736 nreturned:99899 reslen:3296687 292ms
sh22127| Thu Jun 14 01:27:23 100000 objects
sh22127| Thu Jun 14 01:27:23 Metadata for test.mrout to dump/test/mrout.metadata.json
m31000| Thu Jun 14 01:27:23 [conn] end connection 127.0.0.1:46372 (1 connection now open)
result: 0
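"result: 0" is the exit status of the mongodump child started at 01:27:21: it authenticates against the 31000 mongos as the "bar"/"baz" test user and writes test.foo (100000 objects), test.system.users (2 objects), and test.mrout (100000 objects) under dump/test. A hedged sketch of launching the same dump from the mongo shell the way the harness does:

    // Hedged sketch: running mongodump as a child process from the mongo shell;
    // runMongoProgram returns the child's exit code, printed above as "result: 0".
    var exitCode = runMongoProgram("mongodump",
                                   "--host", "127.0.0.1:31000",
                                   "-d", "test",
                                   "-u", "bar",
                                   "-p", "baz");
    assert.eq(0, exitCode, "mongodump against the mongos failed");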
starting read only tests
m31000| Thu Jun 14 01:27:23 [mongosMain] connection accepted from 127.0.0.1:46377 #3 (2 connections now open)
testing find that should fail
m29000| Thu Jun 14 01:27:23 [initandlisten] connection accepted from 10.255.119.66:51985 #20 (18 connections now open)
m29000| Thu Jun 14 01:27:23 [conn20] authenticate db: local { authenticate: 1, nonce: "2ee6c7510a524e06", user: "__system", key: "38cb531b80fe2225a9ef5ddcf0d8ba51" }
logging in
{ "dbname" : "test", "user" : "sad", "readOnly" : true, "ok" : 1 }
testing find that should work
testing write that should fail
testing read command (should succeed)
make sure currentOp/killOp fail
testing logout (should succeed)
make sure currentOp/killOp fail again
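The read-only checks above map onto ordinary shell calls. A hedged outline, assuming a readOnly user "sad" on the test database; the password "sadpwd" is a placeholder and the real test's error handling may differ:

    var testDB = db.getSiblingDB("test");
    assert.throws(function() { testDB.foo.findOne(); });   // find before login: fails
    testDB.auth("sad", "sadpwd");                          // logging in
    printjson(testDB.foo.findOne());                       // find now works
    testDB.foo.insert({ x: 1 });
    printjson(testDB.getLastError());                      // write rejected: "unauthorized"
    try { printjson(testDB.currentOp()); }                 // currentOp/killOp denied
    catch (e) { print("currentOp denied: " + e); }
    testDB.logout();                                       // logout succeeds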
m30999| Thu Jun 14 01:27:23 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31000| Thu Jun 14 01:27:23 [conn] authenticate db: test { authenticate: 1.0, user: "sad", nonce: "7a1b4929e9f2b8a3", key: "2dc64b42e637e64c852ef32863275291" }
m31100| Thu Jun 14 01:27:23 [conn13] end connection 10.255.119.66:54758 (24 connections now open)
m31100| Thu Jun 14 01:27:23 [conn11] end connection 10.255.119.66:54752 (23 connections now open)
m29000| Thu Jun 14 01:27:23 [conn4] end connection 10.255.119.66:36515 (17 connections now open)
m29000| Thu Jun 14 01:27:23 [conn3] end connection 10.255.119.66:36514 (17 connections now open)
m29000| Thu Jun 14 01:27:23 [conn5] end connection 10.255.119.66:36516 (15 connections now open)
m29000| Thu Jun 14 01:27:23 [conn17] end connection 10.255.119.66:51955 (14 connections now open)
m31101| Thu Jun 14 01:27:23 [conn8] end connection 10.255.119.66:46075 (13 connections now open)
m31101| Thu Jun 14 01:27:23 [conn7] end connection 10.255.119.66:46072 (12 connections now open)
m31100| Thu Jun 14 01:27:23 [conn12] end connection 10.255.119.66:54755 (22 connections now open)
m29000| Thu Jun 14 01:27:23 [conn7] end connection 10.255.119.66:36521 (13 connections now open)
m31202| Thu Jun 14 01:27:23 [conn8] end connection 10.255.119.66:58060 (10 connections now open)
m31202| Thu Jun 14 01:27:23 [conn7] end connection 10.255.119.66:58057 (9 connections now open)
m31201| Thu Jun 14 01:27:23 [conn8] end connection 10.255.119.66:47312 (11 connections now open)
m31201| Thu Jun 14 01:27:23 [conn7] end connection 10.255.119.66:47309 (10 connections now open)
m31200| Thu Jun 14 01:27:23 [conn12] end connection 10.255.119.66:34928 (21 connections now open)
m31200| Thu Jun 14 01:27:23 [conn10] end connection 10.255.119.66:34922 (20 connections now open)
m31200| Thu Jun 14 01:27:23 [conn11] end connection 10.255.119.66:34925 (19 connections now open)
m31102| Thu Jun 14 01:27:23 [conn7] end connection 10.255.119.66:54695 (13 connections now open)
m31102| Thu Jun 14 01:27:23 [conn8] end connection 10.255.119.66:54698 (12 connections now open)
Thu Jun 14 01:27:24 shell: stopped mongo program on port 30999
m29000| Thu Jun 14 01:27:24 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:27:24 [interruptThread] now exiting
m29000| Thu Jun 14 01:27:24 dbexit:
m29000| Thu Jun 14 01:27:24 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:27:24 [interruptThread] closing listening socket: 16
m29000| Thu Jun 14 01:27:24 [interruptThread] closing listening socket: 17
m29000| Thu Jun 14 01:27:24 [interruptThread] closing listening socket: 18
m29000| Thu Jun 14 01:27:24 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:27:24 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:27:24 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:27:24 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:27:24 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:27:24 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:27:24 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:27:24 dbexit: really exiting now
Thu Jun 14 01:27:25 shell: stopped mongo program on port 29000
*** ShardingTest auth1 completed successfully in 137.027 seconds ***
m31000| Thu Jun 14 01:27:25 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31101| Thu Jun 14 01:27:25 [conn16] end connection 10.255.119.66:46186 (11 connections now open)
m31201| Thu Jun 14 01:27:25 [conn16] end connection 10.255.119.66:47394 (9 connections now open)
m31201| Thu Jun 14 01:27:25 [conn5] end connection 10.255.119.66:47299 (8 connections now open)
m31200| Thu Jun 14 01:27:25 [conn28] end connection 10.255.119.66:34965 (18 connections now open)
m31200| Thu Jun 14 01:27:25 [conn39] end connection 10.255.119.66:35006 (17 connections now open)
m31200| Thu Jun 14 01:27:25 [conn6] end connection 10.255.119.66:34912 (16 connections now open)
m31102| Thu Jun 14 01:27:25 [conn5] end connection 10.255.119.66:54685 (11 connections now open)
m31102| Thu Jun 14 01:27:25 [conn20] end connection 10.255.119.66:54836 (10 connections now open)
m31102| Thu Jun 14 01:27:25 [conn9] end connection 10.255.119.66:54709 (10 connections now open)
m31200| Thu Jun 14 01:27:25 [conn8] end connection 10.255.119.66:34918 (15 connections now open)
m31200| Thu Jun 14 01:27:25 [conn20] end connection 10.255.119.66:34955 (14 connections now open)
m31201| Thu Jun 14 01:27:25 [conn12] end connection 10.255.119.66:47342 (7 connections now open)
m31100| Thu Jun 14 01:27:25 [conn6] end connection 10.255.119.66:54742 (21 connections now open)
m31100| Thu Jun 14 01:27:25 [conn8] end connection 10.255.119.66:54748 (20 connections now open)
m31100| Thu Jun 14 01:27:25 [conn46] end connection 10.255.119.66:54893 (20 connections now open)
m31100| Thu Jun 14 01:27:25 [conn20] end connection 10.255.119.66:54766 (18 connections now open)
m31100| Thu Jun 14 01:27:25 [conn37] end connection 10.255.119.66:54834 (17 connections now open)
m31200| Thu Jun 14 01:27:25 [conn17] end connection 10.255.119.66:34951 (13 connections now open)
m31202| Thu Jun 14 01:27:25 [conn5] end connection 10.255.119.66:58047 (8 connections now open)
m31202| Thu Jun 14 01:27:25 [conn11] end connection 10.255.119.66:58090 (7 connections now open)
m31100| Thu Jun 14 01:27:25 [conn31] end connection 10.255.119.66:54779 (16 connections now open)
m31100| Thu Jun 14 01:27:25 [conn9] end connection 10.255.119.66:54749 (15 connections now open)
m31101| Thu Jun 14 01:27:25 [conn5] end connection 10.255.119.66:46062 (10 connections now open)
m31101| Thu Jun 14 01:27:25 [conn10] end connection 10.255.119.66:46086 (9 connections now open)
m31100| Thu Jun 14 01:27:26 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:27:26 [interruptThread] now exiting
m31100| Thu Jun 14 01:27:26 dbexit:
m31100| Thu Jun 14 01:27:26 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:27:26 [interruptThread] closing listening socket: 28
m31100| Thu Jun 14 01:27:26 [interruptThread] closing listening socket: 30
m31100| Thu Jun 14 01:27:26 [interruptThread] closing listening socket: 32
m31100| Thu Jun 14 01:27:26 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:27:26 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:27:26 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:27:26 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Thu Jun 14 01:27:26 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:27:26 [conn1] end connection 10.255.119.66:47635 (14 connections now open)
m31102| Thu Jun 14 01:27:26 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31200| Thu Jun 14 01:27:26 [conn14] end connection 10.255.119.66:34930 (12 connections now open)
m31202| Thu Jun 14 01:27:26 [conn9] end connection 10.255.119.66:58065 (6 connections now open)
m31102| Thu Jun 14 01:27:26 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:27:26 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:27:26 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31201| Thu Jun 14 01:27:26 [conn9] end connection 10.255.119.66:47317 (6 connections now open)
m31202| Thu Jun 14 01:27:26 [conn10] end connection 10.255.119.66:58068 (5 connections now open)
m31201| Thu Jun 14 01:27:26 [conn10] end connection 10.255.119.66:47320 (5 connections now open)
m31200| Thu Jun 14 01:27:26 [conn15] end connection 10.255.119.66:34933 (11 connections now open)
m31200| Thu Jun 14 01:27:26 [conn16] end connection 10.255.119.66:34936 (10 connections now open)
m31200| Thu Jun 14 01:27:26 [conn38] end connection 10.255.119.66:34997 (10 connections now open)
m31101| Thu Jun 14 01:27:26 [conn19] end connection 10.255.119.66:46201 (8 connections now open)
m31102| Thu Jun 14 01:27:26 [conn18] end connection 10.255.119.66:54827 (8 connections now open)
m31101| Thu Jun 14 01:27:26 [conn18] end connection 10.255.119.66:46198 (7 connections now open)
m31100| Thu Jun 14 01:27:26 [conn43] end connection 10.255.119.66:54878 (14 connections now open)
m31101| Thu Jun 14 01:27:26 [conn17] end connection 10.255.119.66:46188 (6 connections now open)
m31102| Thu Jun 14 01:27:26 [conn16] end connection 10.255.119.66:54821 (7 connections now open)
m31102| Thu Jun 14 01:27:26 [conn17] end connection 10.255.119.66:54824 (6 connections now open)
m31100| Thu Jun 14 01:27:26 [conn45] end connection 10.255.119.66:54884 (14 connections now open)
m31100| Thu Jun 14 01:27:26 [conn44] end connection 10.255.119.66:54881 (11 connections now open)
m31100| Thu Jun 14 01:27:26 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:27:26 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:27:26 dbexit: really exiting now
m31101| Thu Jun 14 01:27:27 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Thu Jun 14 01:27:27 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31101" }
m31101| Thu Jun 14 01:27:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31101| Thu Jun 14 01:27:27 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31102| Thu Jun 14 01:27:27 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:27:27 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:27:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31101| Thu Jun 14 01:27:27 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:27:27 [interruptThread] now exiting
m31101| Thu Jun 14 01:27:27 dbexit:
m31101| Thu Jun 14 01:27:27 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:27:27 [interruptThread] closing listening socket: 33
m31101| Thu Jun 14 01:27:27 [interruptThread] closing listening socket: 34
m31101| Thu Jun 14 01:27:27 [interruptThread] closing listening socket: 35
m31101| Thu Jun 14 01:27:27 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:27:27 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:27:27 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:27:27 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:27:27 [conn19] end connection 10.255.119.66:54828 (5 connections now open)
m31101| Thu Jun 14 01:27:27 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:27:27 [conn1] end connection 10.255.119.66:40392 (5 connections now open)
m31101| Thu Jun 14 01:27:27 [interruptThread] closeAllFiles() finished
m31101| Thu Jun 14 01:27:27 [interruptThread] shutdown: removing fs lock...
m31101| Thu Jun 14 01:27:27 dbexit: really exiting now
m31102| Thu Jun 14 01:27:28 [MultiCommandJob] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:27:28 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31102| Thu Jun 14 01:27:28 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Thu Jun 14 01:27:28 [interruptThread] now exiting
m31102| Thu Jun 14 01:27:28 dbexit:
m31102| Thu Jun 14 01:27:28 [interruptThread] shutdown: going to close listening sockets...
m31102| Thu Jun 14 01:27:28 [interruptThread] closing listening socket: 36
m31102| Thu Jun 14 01:27:28 [interruptThread] closing listening socket: 37
m31102| Thu Jun 14 01:27:28 [interruptThread] closing listening socket: 38
m31102| Thu Jun 14 01:27:28 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Thu Jun 14 01:27:28 [interruptThread] shutdown: going to flush diaglog...
m31102| Thu Jun 14 01:27:28 [interruptThread] shutdown: going to close sockets...
m31102| Thu Jun 14 01:27:28 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:27:28 [interruptThread] shutdown: closing all files...
m31102| Thu Jun 14 01:27:28 [conn1] end connection 10.255.119.66:45870 (4 connections now open)
m31102| Thu Jun 14 01:27:28 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:27:28 [interruptThread] shutdown: removing fs lock...
m31102| Thu Jun 14 01:27:28 dbexit: really exiting now
m31200| Thu Jun 14 01:27:29 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Thu Jun 14 01:27:29 [interruptThread] now exiting
m31200| Thu Jun 14 01:27:29 dbexit:
m31200| Thu Jun 14 01:27:29 [interruptThread] shutdown: going to close listening sockets...
m31200| Thu Jun 14 01:27:29 [interruptThread] closing listening socket: 38
m31200| Thu Jun 14 01:27:29 [interruptThread] closing listening socket: 39
m31200| Thu Jun 14 01:27:29 [interruptThread] closing listening socket: 41
m31200| Thu Jun 14 01:27:29 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Thu Jun 14 01:27:29 [interruptThread] shutdown: going to flush diaglog...
m31200| Thu Jun 14 01:27:29 [interruptThread] shutdown: going to close sockets...
m31200| Thu Jun 14 01:27:29 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Thu Jun 14 01:27:29 [interruptThread] shutdown: closing all files...
m31200| Thu Jun 14 01:27:29 [conn1] end connection 10.255.119.66:34894 (8 connections now open)
m31201| Thu Jun 14 01:27:29 [conn14] end connection 10.255.119.66:47370 (4 connections now open)
m31202| Thu Jun 14 01:27:29 [conn14] end connection 10.255.119.66:58133 (4 connections now open)
m31201| Thu Jun 14 01:27:29 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31200
m31202| Thu Jun 14 01:27:29 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:27:29 [interruptThread] closeAllFiles() finished
m31200| Thu Jun 14 01:27:29 [interruptThread] shutdown: removing fs lock...
m31200| Thu Jun 14 01:27:29 dbexit: really exiting now
m31202| Thu Jun 14 01:27:30 [rsHealthPoll] DBClientCursor::init call() failed
m31202| Thu Jun 14 01:27:30 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31200 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31200 ns: admin.$cmd query: { replSetHeartbeat: "d2", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31202" }
m31202| Thu Jun 14 01:27:30 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state DOWN
m31202| Thu Jun 14 01:27:30 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31201 would veto
m31201| Thu Jun 14 01:27:30 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Thu Jun 14 01:27:30 [interruptThread] now exiting
m31201| Thu Jun 14 01:27:30 dbexit:
m31201| Thu Jun 14 01:27:30 [interruptThread] shutdown: going to close listening sockets...
m31201| Thu Jun 14 01:27:30 [interruptThread] closing listening socket: 42
m31201| Thu Jun 14 01:27:30 [interruptThread] closing listening socket: 43
m31201| Thu Jun 14 01:27:30 [interruptThread] closing listening socket: 44
m31201| Thu Jun 14 01:27:30 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Thu Jun 14 01:27:30 [interruptThread] shutdown: going to flush diaglog...
m31201| Thu Jun 14 01:27:30 [interruptThread] shutdown: going to close sockets...
m31201| Thu Jun 14 01:27:30 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Thu Jun 14 01:27:30 [interruptThread] shutdown: closing all files...
m31201| Thu Jun 14 01:27:30 [conn1] end connection 10.255.119.66:47283 (3 connections now open)
m31202| Thu Jun 14 01:27:30 [conn15] end connection 10.255.119.66:58135 (3 connections now open)
m31201| Thu Jun 14 01:27:30 [interruptThread] closeAllFiles() finished
m31201| Thu Jun 14 01:27:30 [interruptThread] shutdown: removing fs lock...
m31201| Thu Jun 14 01:27:30 dbexit: really exiting now
m31202| Thu Jun 14 01:27:31 got signal 15 (Terminated), will terminate after current cmd ends
m31202| Thu Jun 14 01:27:31 [interruptThread] now exiting
m31202| Thu Jun 14 01:27:31 dbexit:
m31202| Thu Jun 14 01:27:31 [interruptThread] shutdown: going to close listening sockets...
m31202| Thu Jun 14 01:27:31 [interruptThread] closing listening socket: 45
m31202| Thu Jun 14 01:27:31 [interruptThread] closing listening socket: 46
m31202| Thu Jun 14 01:27:31 [interruptThread] closing listening socket: 47
m31202| Thu Jun 14 01:27:31 [interruptThread] removing socket file: /tmp/mongodb-31202.sock
m31202| Thu Jun 14 01:27:31 [interruptThread] shutdown: going to flush diaglog...
m31202| Thu Jun 14 01:27:31 [interruptThread] shutdown: going to close sockets...
m31202| Thu Jun 14
144139.199972ms
Thu Jun 14 01:27:32 [initandlisten] connection accepted from 127.0.0.1:58919 #8 (7 connections now open)
*******************************************
Test : auth_add_shard.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/auth_add_shard.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/auth_add_shard.js";TestData.testFile = "auth_add_shard.js";TestData.testName = "auth_add_shard";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:27:32 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/auth_add_shard10'
Thu Jun 14 01:27:32 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/auth_add_shard10 --keyFile jstests/libs/key1
m30000| Thu Jun 14 01:27:32
m30000| Thu Jun 14 01:27:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:27:32
m30000| Thu Jun 14 01:27:32 [initandlisten] MongoDB starting : pid=22156 port=30000 dbpath=/data/db/auth_add_shard10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:27:32 [initandlisten]
m30000| Thu Jun 14 01:27:32 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:27:32 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:27:32 [initandlisten]
m30000| Thu Jun 14 01:27:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:27:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:27:32 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:27:32 [initandlisten]
m30000| Thu Jun 14 01:27:32 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:27:32 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:27:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:27:32 [initandlisten] options: { dbpath: "/data/db/auth_add_shard10", keyFile: "jstests/libs/key1", port: 30000 }
m30000| Thu Jun 14 01:27:32 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:27:32 [websvr] admin web console waiting for connections on port 31000
"localhost:30000"
ShardingTest auth_add_shard1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000
]
}
Thu Jun 14 01:27:33 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v --keyFile jstests/libs/key1
m30000| Thu Jun 14 01:27:33 [initandlisten] connection accepted from 127.0.0.1:38920 #1 (1 connection now open)
m30000| Thu Jun 14 01:27:33 [initandlisten] connection accepted from 127.0.0.1:38921 #2 (2 connections now open)
m30000| Thu Jun 14 01:27:33 [conn1] note: no users configured in admin.system.users, allowing localhost access
m30000| Thu Jun 14 01:27:33 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:27:33 [FileAllocator] creating directory /data/db/auth_add_shard10/_tmp
m30999| Thu Jun 14 01:27:33 security key: foopdedoop
m30999| Thu Jun 14 01:27:33 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:27:33 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22169 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:27:33 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:27:33 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:27:33 [mongosMain] options: { configdb: "localhost:30000", keyFile: "jstests/libs/key1", port: 30999, verbose: true }
m30999| Thu Jun 14 01:27:33 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:27:33 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:27:33 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:33 [mongosMain] connected connection!
m30000| Thu Jun 14 01:27:33 [initandlisten] connection accepted from 127.0.0.1:38923 #3 (3 connections now open)
m30000| Thu Jun 14 01:27:33 [conn3] authenticate db: local { authenticate: 1, nonce: "ab9ba3ba3a330312", user: "__system", key: "6570a6775d61bcf0f0a05fa44ddcb9af" }
m30000| Thu Jun 14 01:27:33 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/config.ns, size: 16MB, took 0.481 secs
m30000| Thu Jun 14 01:27:33 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:27:34 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/config.0, size: 16MB, took 0.446 secs
m30999| Thu Jun 14 01:27:34 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:34 [mongosMain] connected connection!
m30999| Thu Jun 14 01:27:34 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:27:34 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:27:34 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:27:34 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:27:34 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:27:34 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:27:34 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:27:34 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:27:34
m30999| Thu Jun 14 01:27:34 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:27:34 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:34 [Balancer] connected connection!
m30999| Thu Jun 14 01:27:34 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:27:34 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:27:34 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651654:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:27:34 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:27:34 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651654:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:27:34 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:27:34 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn2] insert config.settings keyUpdates:0 locks(micros) r:25 w:939671 939ms
m30000| Thu Jun 14 01:27:34 [initandlisten] connection accepted from 127.0.0.1:38928 #4 (4 connections now open)
m30000| Thu Jun 14 01:27:34 [conn4] authenticate db: local { authenticate: 1, nonce: "8f90853b5e75f051", user: "__system", key: "8e92c3a404dbb99727e337cf8713416c" }
m30000| Thu Jun 14 01:27:34 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:27:34 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:27:34 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [initandlisten] connection accepted from 127.0.0.1:38929 #5 (5 connections now open)
m30000| Thu Jun 14 01:27:34 [conn5] authenticate db: local { authenticate: 1, nonce: "183827a7cfd9f8f3", user: "__system", key: "bc8b75c5b140d3109acc2cad57bbdec4" }
m30000| Thu Jun 14 01:27:34 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:34 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:27:34 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:27:34 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd976467b9c101a89bdfac3" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:27:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' acquired, ts : 4fd976467b9c101a89bdfac3
m30999| Thu Jun 14 01:27:34 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:27:34 [Balancer] no collections to balance
m30999| Thu Jun 14 01:27:34 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:27:34 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:27:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' unlocked.
m30999| Thu Jun 14 01:27:34 [mongosMain] connection accepted from 127.0.0.1:51017 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:27:34 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:27:34 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:27:34 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:27:34 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:27:34 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:27:34 [initandlisten] connection accepted from 127.0.0.1:38931 #6 (6 connections now open)
m30999| Thu Jun 14 01:27:34 [conn] connected connection!
m30000| Thu Jun 14 01:27:34 [conn6] authenticate db: local { authenticate: 1, nonce: "af17811355ff3cca", user: "__system", key: "8c31b7cc9271b42732c6d3b402833842" }
m30999| Thu Jun 14 01:27:34 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976467b9c101a89bdfac2
m30999| Thu Jun 14 01:27:34 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:27:34 [conn] note: no users configured in admin.system.users, allowing localhost access
m30999| Thu Jun 14 01:27:34 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30999| Thu Jun 14 01:27:34 BackgroundJob starting: WriteBackListener-localhost:30000
{ "shardAdded" : "shard0000", "ok" : 1 }
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
1 shard system setup
adding user
{
"user" : "foo",
"readOnly" : false,
"pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
"_id" : ObjectId("4fd97646804486c69b98cbb2")
}
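The document printed above under "adding user" is what the 2.x shell's addUser produces for the cluster admin user. A sketch of that bootstrap step; only the username "foo" and its password hash appear in the log, so "foopwd" is a placeholder:

    // Create the admin user through the mongos, then authenticate as it so the
    // following addShard/shardCollection commands are authorized.
    var adminDB = db.getSiblingDB("admin");
    adminDB.addUser("foo", "foopwd");   // the shell prints a user document like the one above
    assert(adminDB.auth("foo", "foopwd"));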
m30000| Thu Jun 14 01:27:35 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/config.1, size: 32MB, took 0.93 secs
m30000| Thu Jun 14 01:27:35 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/admin.ns, filling with zeroes...
m30000| Thu Jun 14 01:27:35 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/admin.ns, size: 16MB, took 0.521 secs
m30000| Thu Jun 14 01:27:35 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/admin.0, filling with zeroes...
m30000| Thu Jun 14 01:27:35 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/admin.0, size: 16MB, took 0.437 secs
m30000| Thu Jun 14 01:27:35 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/admin.1, filling with zeroes...
m30000| Thu Jun 14 01:27:35 [conn6] build index admin.system.users { _id: 1 }
m30000| Thu Jun 14 01:27:35 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:35 [conn6] insert admin.system.users keyUpdates:0 locks(micros) W:65 r:696 w:1799145 1799ms
m30999| Thu Jun 14 01:27:35 [conn] authenticate db: admin { authenticate: 1, nonce: "d29ba1d8867e7abf", user: "foo", key: "2786a49739994ccf0537e4747facfb8b" }
1
Resetting db path '/data/db/mongod-27000'
m30000| Thu Jun 14 01:27:37 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/admin.1, size: 32MB, took 1.203 secs
Thu Jun 14 01:27:37 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 27000 --dbpath /data/db/mongod-27000
m27000| Thu Jun 14 01:27:37
m27000| Thu Jun 14 01:27:37 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m27000| Thu Jun 14 01:27:37
m27000| Thu Jun 14 01:27:37 [initandlisten] MongoDB starting : pid=22195 port=27000 dbpath=/data/db/mongod-27000 32-bit host=domU-12-31-39-01-70-B4
m27000| Thu Jun 14 01:27:37 [initandlisten]
m27000| Thu Jun 14 01:27:37 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m27000| Thu Jun 14 01:27:37 [initandlisten] ** Not recommended for production.
m27000| Thu Jun 14 01:27:37 [initandlisten]
m27000| Thu Jun 14 01:27:37 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m27000| Thu Jun 14 01:27:37 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m27000| Thu Jun 14 01:27:37 [initandlisten] ** with --journal, the limit is lower
m27000| Thu Jun 14 01:27:37 [initandlisten]
m27000| Thu Jun 14 01:27:37 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m27000| Thu Jun 14 01:27:37 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m27000| Thu Jun 14 01:27:37 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m27000| Thu Jun 14 01:27:37 [initandlisten] options: { dbpath: "/data/db/mongod-27000", port: 27000 }
m27000| Thu Jun 14 01:27:37 [websvr] admin web console waiting for connections on port 28000
m27000| Thu Jun 14 01:27:37 [initandlisten] waiting for connections on port 27000
connection to localhost:27000
m27000| Thu Jun 14 01:27:37 [initandlisten] connection accepted from 127.0.0.1:46514 #1 (1 connection now open)
m30999| Thu Jun 14 01:27:37 [conn] creating new connection to:localhost:27000
m30999| Thu Jun 14 01:27:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:37 [conn] connected connection!
m27000| Thu Jun 14 01:27:37 [initandlisten] connection accepted from 127.0.0.1:46515 #2 (2 connections now open)
m27000| Thu Jun 14 01:27:37 [conn2] authenticate db: local { authenticate: 1, nonce: "b3f74740ada6fa61", user: "__system", key: "45fef751e8f781eb7be6a7477264fc8f" }
m30999| Thu Jun 14 01:27:37 [conn] User Assertion: 15847:can't authenticate to shard server
m30999| Thu Jun 14 01:27:37 [conn] addshard request { addShard: "localhost:27000" } failed: couldn't connect to new shard can't authenticate to shard server
{
"ok" : 0,
"errmsg" : "couldn't connect to new shard can't authenticate to shard server"
}
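The failure above is the expected result of addShard when the target mongod was started without the cluster's keyFile: the mongos cannot complete the internal __system authentication. A sketch of the command and check, with the host taken from the log:

    var res = db.getSiblingDB("admin").runCommand({ addShard: "localhost:27000" });
    assert.eq(0, res.ok);
    print(res.errmsg);   // "couldn't connect to new shard can't authenticate to shard server"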
m27000| Thu Jun 14 01:27:37 [conn2] end connection 127.0.0.1:46515 (1 connection now open)
m27000| Thu Jun 14 01:27:37 got signal 15 (Terminated), will terminate after current cmd ends
m27000| Thu Jun 14 01:27:37 [interruptThread] now exiting
m27000| Thu Jun 14 01:27:37 dbexit:
m27000| Thu Jun 14 01:27:37 [interruptThread] shutdown: going to close listening sockets...
m27000| Thu Jun 14 01:27:37 [interruptThread] closing listening socket: 23
m27000| Thu Jun 14 01:27:37 [interruptThread] closing listening socket: 24
m27000| Thu Jun 14 01:27:37 [interruptThread] closing listening socket: 25
m27000| Thu Jun 14 01:27:37 [interruptThread] removing socket file: /tmp/mongodb-27000.sock
m27000| Thu Jun 14 01:27:37 [interruptThread] shutdown: going to flush diaglog...
m27000| Thu Jun 14 01:27:37 [interruptThread] shutdown: going to close sockets...
m27000| Thu Jun 14 01:27:37 [interruptThread] shutdown: waiting for fs preallocator...
m27000| Thu Jun 14 01:27:37 [interruptThread] shutdown: closing all files...
m27000| Thu Jun 14 01:27:37 [interruptThread] closeAllFiles() finished
m27000| Thu Jun 14 01:27:37 [interruptThread] shutdown: removing fs lock...
m27000| Thu Jun 14 01:27:37 dbexit: really exiting now
Thu Jun 14 01:27:38 shell: stopped mongo program on port 27000
Resetting db path '/data/db/mongod-27000'
Thu Jun 14 01:27:38 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --keyFile jstests/libs/key1 --port 27000 --dbpath /data/db/mongod-27000
m27000| Thu Jun 14 01:27:38
m27000| Thu Jun 14 01:27:38 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m27000| Thu Jun 14 01:27:38
m27000| Thu Jun 14 01:27:38 [initandlisten] MongoDB starting : pid=22211 port=27000 dbpath=/data/db/mongod-27000 32-bit host=domU-12-31-39-01-70-B4
m27000| Thu Jun 14 01:27:38 [initandlisten]
m27000| Thu Jun 14 01:27:38 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m27000| Thu Jun 14 01:27:38 [initandlisten] ** Not recommended for production.
m27000| Thu Jun 14 01:27:38 [initandlisten]
m27000| Thu Jun 14 01:27:38 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m27000| Thu Jun 14 01:27:38 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m27000| Thu Jun 14 01:27:38 [initandlisten] ** with --journal, the limit is lower
m27000| Thu Jun 14 01:27:38 [initandlisten]
m27000| Thu Jun 14 01:27:38 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m27000| Thu Jun 14 01:27:38 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m27000| Thu Jun 14 01:27:38 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m27000| Thu Jun 14 01:27:38 [initandlisten] options: { dbpath: "/data/db/mongod-27000", keyFile: "jstests/libs/key1", port: 27000 }
m27000| Thu Jun 14 01:27:38 [initandlisten] waiting for connections on port 27000
m27000| Thu Jun 14 01:27:38 [websvr] admin web console waiting for connections on port 28000
m27000| Thu Jun 14 01:27:38 [initandlisten] connection accepted from 127.0.0.1:46517 #1 (1 connection now open)
m27000| Thu Jun 14 01:27:38 [conn1] note: no users configured in admin.system.users, allowing localhost access
m30999| Thu Jun 14 01:27:38 [conn] creating new connection to:localhost:27000
m30999| Thu Jun 14 01:27:38 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:38 [conn] connected connection!
m27000| Thu Jun 14 01:27:38 [initandlisten] connection accepted from 127.0.0.1:46518 #2 (2 connections now open)
m27000| Thu Jun 14 01:27:38 [conn2] authenticate db: local { authenticate: 1, nonce: "a396d6a1b6c2ff26", user: "__system", key: "936374d2b3a7b312543f124d1378c266" }
m30999| Thu Jun 14 01:27:38 [conn] going to add shard: { _id: "shard0001", host: "localhost:27000" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:27:38 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:27:38 [conn] best shard for new allocation is shard: shard0001:localhost:27000 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:27:38 [conn] put [foo] on: shard0001:localhost:27000
m30999| Thu Jun 14 01:27:38 [conn] enabling sharding on: foo
{ "ok" : 1 }
m30999| Thu Jun 14 01:27:38 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30999| Thu Jun 14 01:27:38 [conn] Moving foo primary from: shard0001:localhost:27000 to: shard0000:localhost:30000
m30999| Thu Jun 14 01:27:38 [conn] created new distributed lock for foo-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:27:38 [conn] inserting initial doc in config.locks for lock foo-movePrimary
m30999| Thu Jun 14 01:27:38 [conn] about to acquire distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651654:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383:conn:1681692777",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:27:38 2012" },
m30999| "why" : "Moving primary shard of foo",
m30999| "ts" : { "$oid" : "4fd9764a7b9c101a89bdfac4" } }
m30999| { "_id" : "foo-movePrimary",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:27:38 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' acquired, ts : 4fd9764a7b9c101a89bdfac4
m27000| Thu Jun 14 01:27:38 [initandlisten] connection accepted from 127.0.0.1:46519 #3 (3 connections now open)
m27000| Thu Jun 14 01:27:38 [conn3] authenticate db: local { authenticate: 1, nonce: "fed939b5911ece33", user: "__system", key: "73bf43b594e2a7fd76267ac3661e0331" }
m30999| Thu Jun 14 01:27:38 [conn] movePrimary dropping database on localhost:27000, no sharded collections in foo
m27000| Thu Jun 14 01:27:38 [conn3] end connection 127.0.0.1:46519 (2 connections now open)
m27000| Thu Jun 14 01:27:38 [conn2] dropDatabase foo
m30999| Thu Jun 14 01:27:38 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' unlocked.
{ "primary " : "shard0000:localhost:30000", "ok" : 1 }
m30999| Thu Jun 14 01:27:38 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Thu Jun 14 01:27:38 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:27:38 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:27:38 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd9764a7b9c101a89bdfac5
m30999| Thu Jun 14 01:27:38 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:27:38 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:38 [conn] connected connection!
m30000| Thu Jun 14 01:27:38 [initandlisten] connection accepted from 127.0.0.1:38939 #7 (7 connections now open)
m30000| Thu Jun 14 01:27:38 [conn7] authenticate db: local { authenticate: 1, nonce: "3b0a974bbb34bab6", user: "__system", key: "81e11b4e58fe9e65d45ed547ee1d8570" }
m27000| Thu Jun 14 01:27:38 [initandlisten] connection accepted from 127.0.0.1:46521 #4 (3 connections now open)
m27000| Thu Jun 14 01:27:38 [conn4] authenticate db: local { authenticate: 1, nonce: "1759acad53d54aeb", user: "__system", key: "e09da7a606cafc8d29d12c341ae2c6a1" }
m30000| Thu Jun 14 01:27:38 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/foo.ns, filling with zeroes...
m30000| Thu Jun 14 01:27:38 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:27:38 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:27:38 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd9764a7b9c101a89bdfac5 based on: (empty)
m30999| Thu Jun 14 01:27:38 [conn] creating new connection to:localhost:27000
m30999| Thu Jun 14 01:27:38 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:38 [conn] connected connection!
m30999| Thu Jun 14 01:27:38 [conn] creating WriteBackListener for: localhost:27000 serverID: 4fd976467b9c101a89bdfac2
m30999| Thu Jun 14 01:27:38 [conn] initializing shard connection to localhost:27000
m30999| Thu Jun 14 01:27:38 BackgroundJob starting: WriteBackListener-localhost:27000
m30999| Thu Jun 14 01:27:38 [conn] resetting shard version of foo.bar on localhost:27000, version is zero
m30999| Thu Jun 14 01:27:38 [conn] setShardVersion shard0001 localhost:27000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976467b9c101a89bdfac2'), shard: "shard0001", shardHost: "localhost:27000" } 0xa5b98a8
m30999| Thu Jun 14 01:27:38 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:27:38 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), serverID: ObjectId('4fd976467b9c101a89bdfac2'), shard: "shard0000", shardHost: "localhost:30000" } 0xa5b4480
m30000| Thu Jun 14 01:27:39 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/foo.ns, size: 16MB, took 0.432 secs
m30000| Thu Jun 14 01:27:39 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:27:39 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/foo.0, size: 16MB, took 0.288 secs
m30000| Thu Jun 14 01:27:39 [conn7] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:27:39 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:27:39 [conn7] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:27:39 [conn7] insert foo.system.indexes keyUpdates:0 locks(micros) r:18 w:730786 730ms
m30000| Thu Jun 14 01:27:39 [conn6] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), serverID: ObjectId('4fd976467b9c101a89bdfac2'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:722 w:1799145 reslen:171 728ms
m30000| Thu Jun 14 01:27:39 [conn6] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:27:39 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/foo.1, filling with zeroes...
m30000| Thu Jun 14 01:27:39 [initandlisten] connection accepted from 127.0.0.1:38941 #8 (8 connections now open)
m30000| Thu Jun 14 01:27:39 [conn8] authenticate db: local { authenticate: 1, nonce: "7f730297e84d916e", user: "__system", key: "f140af23533b4bfa7092d6a67b85b6e5" }
m30000| Thu Jun 14 01:27:39 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:27:39 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
{ "ok" : 1 }
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' acquired, ts : 4fd9764be77889243628e2e6
m30000| Thu Jun 14 01:27:39 [conn5] splitChunk accepted at version 1|0||4fd9764a7b9c101a89bdfac5
m30000| Thu Jun 14 01:27:39 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:39-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38929", time: new Date(1339651659435), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') } } }
m30000| Thu Jun 14 01:27:39 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339651659:398526569 (sleeping for 30000ms)
m30000| Thu Jun 14 01:27:39 [initandlisten] connection accepted from 127.0.0.1:38942 #9 (9 connections now open)
m30000| Thu Jun 14 01:27:39 [conn9] authenticate db: local { authenticate: 1, nonce: "16d7b968d082e93b", user: "__system", key: "13e0a14e194f5e0e6b24824c46a9cd40" }
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' unlocked.
m30000| Thu Jun 14 01:27:39 [initandlisten] connection accepted from 127.0.0.1:38943 #10 (10 connections now open)
m30000| Thu Jun 14 01:27:39 [conn10] authenticate db: local { authenticate: 1, nonce: "32edd074f809d91b", user: "__system", key: "491de38083c9d3ed2b53234544fb8251" }
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), serverID: ObjectId('4fd976467b9c101a89bdfac2'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa5b4480
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:27:39 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 4807987 splitThreshold: 921
m30999| Thu Jun 14 01:27:39 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:27:39 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30999| Thu Jun 14 01:27:39 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|2||4fd9764a7b9c101a89bdfac5 based on: 1|0||4fd9764a7b9c101a89bdfac5
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), serverID: ObjectId('4fd976467b9c101a89bdfac2'), shard: "shard0000", shardHost: "localhost:30000" } 0xa5b4480
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ok: 1.0 }
m30999| Thu Jun 14 01:27:39 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 9314141 splitThreshold: 471859
m30999| Thu Jun 14 01:27:39 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:27:39 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:27:39 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:27:39 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' acquired, ts : 4fd9764be77889243628e2e7
m30000| Thu Jun 14 01:27:39 [conn5] splitChunk accepted at version 1|2||4fd9764a7b9c101a89bdfac5
m30000| Thu Jun 14 01:27:39 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:39-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38929", time: new Date(1339651659442), what: "split", ns: "foo.bar", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') } } }
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' unlocked.
m30999| Thu Jun 14 01:27:39 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 1|4||4fd9764a7b9c101a89bdfac5 based on: 1|2||4fd9764a7b9c101a89bdfac5
{ "ok" : 1 }
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), serverID: ObjectId('4fd976467b9c101a89bdfac2'), shard: "shard0000", shardHost: "localhost:30000" } 0xa5b4480
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ok: 1.0 }
m30999| Thu Jun 14 01:27:39 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|4||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey } dataWritten: 7422502 splitThreshold: 11796480
m30999| Thu Jun 14 01:27:39 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:27:39 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|4||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:27:39 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:27:39 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' acquired, ts : 4fd9764be77889243628e2e8
m30000| Thu Jun 14 01:27:39 [conn5] splitChunk accepted at version 1|4||4fd9764a7b9c101a89bdfac5
m30000| Thu Jun 14 01:27:39 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:39-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38929", time: new Date(1339651659446), what: "split", ns: "foo.bar", details: { before: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') }, right: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') } } }
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' unlocked.
m30999| Thu Jun 14 01:27:39 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 1|6||4fd9764a7b9c101a89bdfac5 based on: 1|4||4fd9764a7b9c101a89bdfac5
{ "ok" : 1 }
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), serverID: ObjectId('4fd976467b9c101a89bdfac2'), shard: "shard0000", shardHost: "localhost:30000" } 0xa5b4480
m30999| Thu Jun 14 01:27:39 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ok: 1.0 }
m30999| Thu Jun 14 01:27:39 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|6||000000000000000000000000 min: { _id: 2.0 } max: { _id: MaxKey } dataWritten: 9462518 splitThreshold: 11796480
m30999| Thu Jun 14 01:27:39 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:27:39 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|6||000000000000000000000000 min: { _id: 2.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:27:39 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "foo.bar-_id_2.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:27:39 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' acquired, ts : 4fd9764be77889243628e2e9
m30000| Thu Jun 14 01:27:39 [conn5] splitChunk accepted at version 1|6||4fd9764a7b9c101a89bdfac5
m30000| Thu Jun 14 01:27:39 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:39-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38929", time: new Date(1339651659451), what: "split", ns: "foo.bar", details: { before: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') }, right: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5') } } }
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' unlocked.
m30999| Thu Jun 14 01:27:39 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 6 version: 1|8||4fd9764a7b9c101a89bdfac5 based on: 1|6||4fd9764a7b9c101a89bdfac5
{ "ok" : 1 }
m30999| Thu Jun 14 01:27:39 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 1.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:27:39 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|5||000000000000000000000000 min: { _id: 1.0 } max: { _id: 2.0 }) shard0000:localhost:30000 -> shard0001:localhost:27000
m30000| Thu Jun 14 01:27:39 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:27000", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:27:39 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:27:39 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' acquired, ts : 4fd9764be77889243628e2ea
m30000| Thu Jun 14 01:27:39 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:39-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38929", time: new Date(1339651659453), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:27:39 [conn5] moveChunk request accepted at version 1|8||4fd9764a7b9c101a89bdfac5
m30000| Thu Jun 14 01:27:39 [conn5] moveChunk number of documents: 1
m30000| Thu Jun 14 01:27:39 [initandlisten] connection accepted from 127.0.0.1:38945 #11 (11 connections now open)
m30000| Thu Jun 14 01:27:39 [conn11] authenticate db: local { authenticate: 1, nonce: "b4ff75174fc9b702", user: "__system", key: "07163ec6ccc229506525c6e56f31d714" }
m27000| Thu Jun 14 01:27:39 [initandlisten] connection accepted from 127.0.0.1:46525 #5 (4 connections now open)
m27000| Thu Jun 14 01:27:39 [conn5] authenticate db: local { authenticate: 1, nonce: "56d6303027dbf068", user: "__system", key: "8b5d614a14a489a85ab3eb343c5a18af" }
m27000| Thu Jun 14 01:27:39 [FileAllocator] allocating new datafile /data/db/mongod-27000/foo.ns, filling with zeroes...
m27000| Thu Jun 14 01:27:39 [FileAllocator] creating directory /data/db/mongod-27000/_tmp
m30000| Thu Jun 14 01:27:40 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/foo.1, size: 32MB, took 0.892 secs
m27000| Thu Jun 14 01:27:40 [FileAllocator] done allocating datafile /data/db/mongod-27000/foo.ns, size: 16MB, took 0.739 secs
m27000| Thu Jun 14 01:27:40 [FileAllocator] allocating new datafile /data/db/mongod-27000/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:27:40 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m27000| Thu Jun 14 01:27:40 [FileAllocator] done allocating datafile /data/db/mongod-27000/foo.0, size: 16MB, took 0.314 secs
m27000| Thu Jun 14 01:27:40 [FileAllocator] allocating new datafile /data/db/mongod-27000/foo.1, filling with zeroes...
m27000| Thu Jun 14 01:27:40 [migrateThread] build index foo.bar { _id: 1 }
m27000| Thu Jun 14 01:27:40 [migrateThread] build index done. scanned 0 total records. 0 secs
m27000| Thu Jun 14 01:27:40 [migrateThread] info: creating collection foo.bar on add index
m27000| Thu Jun 14 01:27:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
m27000| Thu Jun 14 01:27:41 [FileAllocator] done allocating datafile /data/db/mongod-27000/foo.1, size: 32MB, took 0.594 secs
m30000| Thu Jun 14 01:27:41 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 18, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:27:41 [conn5] moveChunk setting version to: 2|0||4fd9764a7b9c101a89bdfac5
m27000| Thu Jun 14 01:27:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
m27000| Thu Jun 14 01:27:41 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:41-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651661468), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 1192, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 820 } }
m30000| Thu Jun 14 01:27:41 [initandlisten] connection accepted from 127.0.0.1:38946 #12 (12 connections now open)
m30000| Thu Jun 14 01:27:41 [conn12] authenticate db: local { authenticate: 1, nonce: "71f6435b7fa966df", user: "__system", key: "9c2f943f659b5cda3db65c2539a9a70c" }
m30000| Thu Jun 14 01:27:41 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 18, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:27:41 [conn5] moveChunk updating self version to: 2|1||4fd9764a7b9c101a89bdfac5 through { _id: MinKey } -> { _id: 0.0 } for collection 'foo.bar'
m30000| Thu Jun 14 01:27:41 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:41-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38929", time: new Date(1339651661472), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:27:41 [conn5] doing delete inline
m30000| Thu Jun 14 01:27:41 [conn5] moveChunk deleted: 1
m30000| Thu Jun 14 01:27:41 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651659:398526569' unlocked.
m30000| Thu Jun 14 01:27:41 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:27:41-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38929", time: new Date(1339651661473), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2005, step5 of 6: 12, step6 of 6: 0 } }
m30000| Thu Jun 14 01:27:41 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:27000", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2755 w:2621 reslen:37 2020ms
m30999| Thu Jun 14 01:27:41 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:27:41 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 7 version: 2|1||4fd9764a7b9c101a89bdfac5 based on: 1|8||4fd9764a7b9c101a89bdfac5
{ "millis" : 2021, "ok" : 1 }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:27000" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
foo.bar chunks:
shard0000 4
shard0001 1
{ "_id" : { $minKey : 1 } } -->> { "_id" : 0 } on : shard0000 Timestamp(2000, 1)
{ "_id" : 0 } -->> { "_id" : 1 } on : shard0000 Timestamp(1000, 3)
{ "_id" : 1 } -->> { "_id" : 2 } on : shard0001 Timestamp(2000, 0)
{ "_id" : 2 } -->> { "_id" : 3 } on : shard0000 Timestamp(1000, 7)
{ "_id" : 3 } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(1000, 8)
m30999| Thu Jun 14 01:27:44 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:27:44 [Balancer] creating new connection to:localhost:27000
m30999| Thu Jun 14 01:27:44 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:44 [Balancer] connected connection!
m27000| Thu Jun 14 01:27:44 [initandlisten] connection accepted from 127.0.0.1:46528 #6 (5 connections now open)
m27000| Thu Jun 14 01:27:44 [conn6] authenticate db: local { authenticate: 1, nonce: "eb3468ff0dd66541", user: "__system", key: "8da36da087906068144f3e41284d2434" }
m30999| Thu Jun 14 01:27:44 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:27:44 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd976507b9c101a89bdfac6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976467b9c101a89bdfac3" } }
m30999| Thu Jun 14 01:27:44 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' acquired, ts : 4fd976507b9c101a89bdfac6
m30999| Thu Jun 14 01:27:44 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:27:44 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:27:44 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:44 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:27:44 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:27:44 [Balancer] shard0000
m30999| Thu Jun 14 01:27:44 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:27:44 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:27:44 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:27:44 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:27:44 [Balancer] shard0001
m30999| Thu Jun 14 01:27:44 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:27:44 [Balancer] ----
m30999| Thu Jun 14 01:27:44 [Balancer] collection : foo.bar
m30999| Thu Jun 14 01:27:44 [Balancer] donor : 4 chunks on shard0000
m30999| Thu Jun 14 01:27:44 [Balancer] receiver : 1 chunks on shard0001
m30999| Thu Jun 14 01:27:44 [Balancer] chose [shard0000] to [shard0001] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:27:44 [Balancer] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30999| Thu Jun 14 01:27:44 [Balancer] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 8 version: 2|1||4fd9764a7b9c101a89bdfac5 based on: (empty)
m30999| Thu Jun 14 01:27:44 [Balancer] Assertion: 10320:BSONElement: bad type -84
m30999| 0x84f514a 0x8126495 0x83f3537 0x811ddd3 0x835a42a 0x82c3073 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0xae9542 0xd96b6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo11BSONElement4sizeEv+0x1b3) [0x811ddd3]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo12ChunkManager9findChunkERKNS_7BSONObjE+0x18a) [0x835a42a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x613) [0x82c3073]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c) [0x82c4b6c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0xae9542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0xd96b6e]
m30999| Thu Jun 14 01:27:44 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' unlocked.
m30999| Thu Jun 14 01:27:44 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Thu Jun 14 01:27:44 [Balancer] caught exception while doing balance: BSONElement: bad type -84
m30999| Thu Jun 14 01:27:44 [Balancer] *** End of balancing round
m30000| Thu Jun 14 01:27:44 [conn5] end connection 127.0.0.1:38929 (11 connections now open)
m30999| Thu Jun 14 01:27:44 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Thu Jun 14 01:27:44 [conn] going to start draining shard: shard0001
m30999| primaryLocalDoc: { _id: "local", primary: "shard0001" }
m30999| Thu Jun 14 01:27:44 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:27:44 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:27:44 [conn] connected connection!
m30000| Thu Jun 14 01:27:44 [initandlisten] connection accepted from 127.0.0.1:38948 #13 (12 connections now open)
m30000| Thu Jun 14 01:27:44 [conn13] authenticate db: local { authenticate: 1, nonce: "7f7b486fa1b12910", user: "__system", key: "9417e670d6ff38dcb350ebb677aeb8b7" }
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard0001",
"ok" : 1
}
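The status documents that follow come from repeatedly re-issuing removeShard against mongos until the last chunk has drained off shard0001. The polling loop in the test is not shown in this log; a minimal sketch of the pattern (interval and variable names are illustrative only) is:

    var res = db.adminCommand({ removeShard: "shard0001" });  // first call: "draining started successfully"
    printjson(res);
    while (res.state != "completed") {
        sleep(200);                                           // polling interval is a guess
        res = db.adminCommand({ removeShard: "shard0001" });  // subsequent calls report "draining ongoing"
        printjson(res);
    }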
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Thu Jun 14 01:28:04 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:28:04 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651654:1804289383', sleeping for 30000ms
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Thu Jun 14 01:28:14 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:28:14 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651654:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:28:14 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd9766e7b9c101a89bdfac7" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976507b9c101a89bdfac6" } }
m30999| Thu Jun 14 01:28:14 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' acquired, ts : 4fd9766e7b9c101a89bdfac7
m30999| Thu Jun 14 01:28:14 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:28:14 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:28:14 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:28:14 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Thu Jun 14 01:28:14 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:28:14 [Balancer] shard0000
m30999| Thu Jun 14 01:28:14 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:14 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:14 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:14 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:14 [Balancer] shard0001
m30999| Thu Jun 14 01:28:14 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:14 [Balancer] ----
m30999| Thu Jun 14 01:28:14 [Balancer] collection : foo.bar
m30999| Thu Jun 14 01:28:14 [Balancer] donor : 4 chunks on shard0000
m30999| Thu Jun 14 01:28:14 [Balancer] receiver : 4 chunks on shard0000
m30999| Thu Jun 14 01:28:14 [Balancer] draining : 1(1)
m30999| Thu Jun 14 01:28:14 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd9764a7b9c101a89bdfac5'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:14 [Balancer] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:27000 lastmod: 2|0||000000000000000000000000 min: { _id: 1.0 } max: { _id: 2.0 }) shard0001:localhost:27000 -> shard0000:localhost:30000
m27000| Thu Jun 14 01:28:14 [conn6] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:27000", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m27000| Thu Jun 14 01:28:14 [conn6] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m27000| Thu Jun 14 01:28:14 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:27000:1339651694:472952563 (sleeping for 30000ms)
m27000| Thu Jun 14 01:28:14 [conn6] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:27000:1339651694:472952563' acquired, ts : 4fd9766e2e9c1ae039020355
m27000| Thu Jun 14 01:28:14 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:14-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46528", time: new Date(1339651694085), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0001", to: "shard0000" } }
m27000| Thu Jun 14 01:28:14 [conn6] no current chunk manager found for this shard, will initialize
m27000| Thu Jun 14 01:28:14 [conn6] moveChunk request accepted at version 2|0||4fd9764a7b9c101a89bdfac5
m27000| Thu Jun 14 01:28:14 [conn6] moveChunk number of documents: 1
m30000| Thu Jun 14 01:28:14 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m27000| Thu Jun 14 01:28:15 [conn6] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:27000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 18, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m27000| Thu Jun 14 01:28:15 [conn6] moveChunk setting version to: 3|0||4fd9764a7b9c101a89bdfac5
m30000| Thu Jun 14 01:28:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
m30000| Thu Jun 14 01:28:15 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:15-7", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651695095), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m27000| Thu Jun 14 01:28:15 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:27000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 18, catchup: 0, steady: 0 }, ok: 1.0 }
m27000| Thu Jun 14 01:28:15 [conn6] moveChunk moved last chunk out for collection 'foo.bar'
m27000| Thu Jun 14 01:28:15 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:15-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46528", time: new Date(1339651695100), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0001", to: "shard0000" } }
m27000| Thu Jun 14 01:28:15 [conn6] doing delete inline
m27000| Thu Jun 14 01:28:15 [conn6] moveChunk deleted: 1
m27000| Thu Jun 14 01:28:15 [conn6] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:27000:1339651694:472952563' unlocked.
m27000| Thu Jun 14 01:28:15 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:15-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46528", time: new Date(1339651695101), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 0 } }
m27000| Thu Jun 14 01:28:15 [conn6] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:27000", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:275 w:346 reslen:37 1018ms
m30999| Thu Jun 14 01:28:15 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:15 [Balancer] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 9 version: 3|0||4fd9764a7b9c101a89bdfac5 based on: 2|1||4fd9764a7b9c101a89bdfac5
m30999| Thu Jun 14 01:28:15 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:28:15 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651654:1804289383' unlocked.
m30999| Thu Jun 14 01:28:15 [conn] going to remove shard: shard0001
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "shard0001",
"ok" : 1
}
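Once removeShard reports "completed", the drained shard is no longer part of the cluster metadata; a quick check against the config database (a sketch) would be:

    // only shard0000 should remain after shard0001 finishes draining
    printjson(db.getSiblingDB("config").shards.find().toArray());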
m27000| Thu Jun 14 01:28:15 got signal 15 (Terminated), will terminate after current cmd ends
m27000| Thu Jun 14 01:28:15 [interruptThread] now exiting
m27000| Thu Jun 14 01:28:15 dbexit:
m27000| Thu Jun 14 01:28:15 [interruptThread] shutdown: going to close listening sockets...
m27000| Thu Jun 14 01:28:15 [interruptThread] closing listening socket: 26
m27000| Thu Jun 14 01:28:15 [interruptThread] closing listening socket: 27
m27000| Thu Jun 14 01:28:15 [interruptThread] closing listening socket: 28
m27000| Thu Jun 14 01:28:15 [interruptThread] removing socket file: /tmp/mongodb-27000.sock
m27000| Thu Jun 14 01:28:15 [interruptThread] shutdown: going to flush diaglog...
m27000| Thu Jun 14 01:28:15 [interruptThread] shutdown: going to close sockets...
m27000| Thu Jun 14 01:28:15 [interruptThread] shutdown: waiting for fs preallocator...
m27000| Thu Jun 14 01:28:15 [interruptThread] shutdown: closing all files...
m27000| Thu Jun 14 01:28:15 [interruptThread] closeAllFiles() finished
m30999| Thu Jun 14 01:28:15 [WriteBackListener-localhost:27000] SocketException: remote: 127.0.0.1:27000 error: 9001 socket exception [0] server [127.0.0.1:27000]
m30999| Thu Jun 14 01:28:15 [WriteBackListener-localhost:27000] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:28:15 [WriteBackListener-localhost:27000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:27000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd976467b9c101a89bdfac2') }
m30999| Thu Jun 14 01:28:15 [WriteBackListener-localhost:27000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:27000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd976467b9c101a89bdfac2') }
m30000| Thu Jun 14 01:28:15 [conn11] end connection 127.0.0.1:38945 (11 connections now open)
m30000| Thu Jun 14 01:28:15 [conn12] end connection 127.0.0.1:38946 (10 connections now open)
m27000| Thu Jun 14 01:28:15 [interruptThread] shutdown: removing fs lock...
m27000| Thu Jun 14 01:28:15 dbexit: really exiting now
Thu Jun 14 01:28:16 shell: stopped mongo program on port 27000
m30999| Thu Jun 14 01:28:16 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:28:16 [conn3] end connection 127.0.0.1:38923 (9 connections now open)
m30000| Thu Jun 14 01:28:16 [conn13] end connection 127.0.0.1:38948 (8 connections now open)
m30000| Thu Jun 14 01:28:16 [conn6] end connection 127.0.0.1:38931 (7 connections now open)
m30000| Thu Jun 14 01:28:16 [conn7] end connection 127.0.0.1:38939 (6 connections now open)
Thu Jun 14 01:28:17 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:28:17 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:28:17 [interruptThread] now exiting
m30000| Thu Jun 14 01:28:17 dbexit:
m30000| Thu Jun 14 01:28:17 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:28:17 [interruptThread] closing listening socket: 17
m30000| Thu Jun 14 01:28:17 [interruptThread] closing listening socket: 18
m30000| Thu Jun 14 01:28:17 [interruptThread] closing listening socket: 19
m30000| Thu Jun 14 01:28:17 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:28:17 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:28:17 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:28:17 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:28:17 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:28:17 [conn8] end connection 127.0.0.1:38941 (5 connections now open)
Thu Jun 14 01:28:17 [clientcursormon] mem (MB) res:16 virt:114 mapped:0
m30000| Thu Jun 14 01:28:17 [conn10] end connection 127.0.0.1:38943 (5 connections now open)
m30000| Thu Jun 14 01:28:17 [conn9] end connection 127.0.0.1:38942 (5 connections now open)
m30000| Thu Jun 14 01:28:17 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:28:17 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:28:17 dbexit: really exiting now
Thu Jun 14 01:28:18 shell: stopped mongo program on port 30000
*** ShardingTest auth_add_shard1 completed successfully in 45.308 seconds ***
45395.049095ms
Thu Jun 14 01:28:18 [initandlisten] connection accepted from 127.0.0.1:58950 #9 (8 connections now open)
*******************************************
Test : auto1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/auto1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/auto1.js";TestData.testFile = "auto1.js";TestData.testName = "auto1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:28:18 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/auto10'
Thu Jun 14 01:28:18 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/auto10
m30000| Thu Jun 14 01:28:18
m30000| Thu Jun 14 01:28:18 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:28:18
m30000| Thu Jun 14 01:28:18 [initandlisten] MongoDB starting : pid=22257 port=30000 dbpath=/data/db/auto10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:28:18 [initandlisten]
m30000| Thu Jun 14 01:28:18 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:28:18 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:28:18 [initandlisten]
m30000| Thu Jun 14 01:28:18 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:28:18 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:28:18 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:28:18 [initandlisten]
m30000| Thu Jun 14 01:28:18 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:28:18 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:28:18 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:28:18 [initandlisten] options: { dbpath: "/data/db/auto10", port: 30000 }
m30000| Thu Jun 14 01:28:18 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:28:18 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/auto11'
m30000| Thu Jun 14 01:28:18 [initandlisten] connection accepted from 127.0.0.1:38951 #1 (1 connection now open)
Thu Jun 14 01:28:18 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/auto11
m30001| Thu Jun 14 01:28:18
m30001| Thu Jun 14 01:28:18 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:28:18
m30001| Thu Jun 14 01:28:18 [initandlisten] MongoDB starting : pid=22270 port=30001 dbpath=/data/db/auto11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:28:18 [initandlisten]
m30001| Thu Jun 14 01:28:18 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:28:18 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:28:18 [initandlisten]
m30001| Thu Jun 14 01:28:18 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:28:18 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:28:18 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:28:18 [initandlisten]
m30001| Thu Jun 14 01:28:18 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:28:18 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:28:18 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:28:18 [initandlisten] options: { dbpath: "/data/db/auto11", port: 30001 }
m30001| Thu Jun 14 01:28:18 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:28:18 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:28:18 [initandlisten] connection accepted from 127.0.0.1:58843 #1 (1 connection now open)
ShardingTest auto1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
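auto1.js drives this cluster through the shell's ShardingTest helper, which produced the banner above. The exact constructor call is not visible in the log; the line below is only an assumed sketch of the older positional form (test name, shard count, verbosity, mongos count):

    // assumption: approximate helper call for two shards and one mongos
    var s = new ShardingTest("auto1", 2, 1, 1);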
Thu Jun 14 01:28:18 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:28:18 [initandlisten] connection accepted from 127.0.0.1:38954 #2 (2 connections now open)
m30000| Thu Jun 14 01:28:18 [FileAllocator] allocating new datafile /data/db/auto10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:28:18 [FileAllocator] creating directory /data/db/auto10/_tmp
m30999| Thu Jun 14 01:28:18 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:28:18 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22284 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:28:18 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:28:18 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:28:18 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:28:18 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:28:18 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:18 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:18 [initandlisten] connection accepted from 127.0.0.1:38956 #3 (3 connections now open)
m30999| Thu Jun 14 01:28:18 [mongosMain] connected connection!
m30000| Thu Jun 14 01:28:18 [FileAllocator] done allocating datafile /data/db/auto10/config.ns, size: 16MB, took 0.251 secs
m30000| Thu Jun 14 01:28:18 [FileAllocator] allocating new datafile /data/db/auto10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:28:19 [FileAllocator] done allocating datafile /data/db/auto10/config.0, size: 16MB, took 0.265 secs
m30000| Thu Jun 14 01:28:19 [FileAllocator] allocating new datafile /data/db/auto10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:28:19 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:19 [conn2] insert config.settings keyUpdates:0 locks(micros) w:535699 535ms
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:28:19 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:19 [initandlisten] connection accepted from 127.0.0.1:38959 #4 (4 connections now open)
m30999| Thu Jun 14 01:28:19 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:28:19 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:19 [mongosMain] connected connection!
m30000| Thu Jun 14 01:28:19 [initandlisten] connection accepted from 127.0.0.1:38960 #5 (5 connections now open)
m30000| Thu Jun 14 01:28:19 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:19 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:28:19 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:28:19 [mongosMain] waiting for connections on port 30999
m30000| Thu Jun 14 01:28:19 [conn3] build index config.chunks { _id: 1 }
m30999| Thu Jun 14 01:28:19 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:28:19 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:28:19 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: PeriodicTask::Runner
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:19 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:28:19 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:19 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:28:19 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:28:19
m30999| Thu Jun 14 01:28:19 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:19 [Balancer] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:28:19 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:19 [initandlisten] connection accepted from 127.0.0.1:38961 #6 (6 connections now open)
m30000| Thu Jun 14 01:28:19 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30999| Thu Jun 14 01:28:19 [Balancer] connected connection!
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:19 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:19 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:19 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:28:19 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:19 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:28:19 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:28:19 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651699:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651699:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651699:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:28:19 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd9767369ab01b9a70406ed" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:28:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651699:1804289383' acquired, ts : 4fd9767369ab01b9a70406ed
m30999| Thu Jun 14 01:28:19 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:28:19 [Balancer] no collections to balance
m30999| Thu Jun 14 01:28:19 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:28:19 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:28:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651699:1804289383' unlocked.
m30000| Thu Jun 14 01:28:19 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:19 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651699:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:28:19 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:19 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:28:19 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651699:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:28:19 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 1 total records. 0 secs
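
The startup above shows mongos creating its sharding metadata on the lone config server at localhost:30000 (config.settings, config.chunks, config.shards, config.mongos, config.locks, config.lockpings) and starting a LockPinger that renews its distributed-lock lease every 30000ms. A minimal sketch for inspecting that metadata from a mongo shell pointed at the mongos on port 30999 (ports and collection names come from the log; the queries themselves are illustrative assumptions, not part of the test):

    // connect to the mongos started above
    var conf = connect("localhost:30999/config");
    // one document per distributed lock; state 0 means unlocked
    printjson(conf.locks.find().toArray());
    // the LockPinger updates one document per process roughly every 30s
    printjson(conf.lockpings.find().sort({ ping: -1 }).limit(5).toArray());
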
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:28:19 [mongosMain] connection accepted from 127.0.0.1:51049 #1 (1 connection now open)
m30999| Thu Jun 14 01:28:19 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:28:19 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:19 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:28:19 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:28:19 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:19 [conn] connected connection!
m30999| Thu Jun 14 01:28:19 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:28:19 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:28:19 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:28:19 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:19 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:28:19 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { num: 1.0 } }
m30999| Thu Jun 14 01:28:19 [conn] enable sharding on: test.foo with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:28:19 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:19 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd9767369ab01b9a70406ee based on: (empty)
m30000| Thu Jun 14 01:28:19 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:28:19 [conn3] build index done. scanned 0 total records. 0 secs
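
The mongos log records the exact commands the test ran here: "enabling sharding on: test" followed by CMD: shardcollection: { shardcollection: "test.foo", key: { num: 1.0 } }, which seeds a single MinKey->MaxKey chunk with epoch 4fd9767369ab01b9a70406ee. The same pair of admin commands could be issued by hand from a shell connected to the mongos:

    var admin = connect("localhost:30999/admin");
    printjson(admin.runCommand({ enablesharding: "test" }));
    printjson(admin.runCommand({ shardcollection: "test.foo", key: { num: 1 } }));
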
m30999| Thu Jun 14 01:28:19 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:19 [initandlisten] connection accepted from 127.0.0.1:38964 #7 (7 connections now open)
m30999| Thu Jun 14 01:28:19 [conn] connected connection!
m30999| Thu Jun 14 01:28:19 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd9767369ab01b9a70406ec
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:28:19 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:28:19 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:28:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0000", shardHost: "localhost:30000" } 0xa3f4748
m30999| Thu Jun 14 01:28:19 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:28:19 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:19 [conn] connected connection!
m30999| Thu Jun 14 01:28:19 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd9767369ab01b9a70406ec
m30999| Thu Jun 14 01:28:19 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:28:19 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:28:19 [initandlisten] connection accepted from 127.0.0.1:58853 #2 (2 connections now open)
m30001| Thu Jun 14 01:28:19 [FileAllocator] allocating new datafile /data/db/auto11/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:28:19 [FileAllocator] creating directory /data/db/auto11/_tmp
m30001| Thu Jun 14 01:28:19 [initandlisten] connection accepted from 127.0.0.1:58855 #3 (3 connections now open)
m30000| Thu Jun 14 01:28:19 [FileAllocator] done allocating datafile /data/db/auto10/config.1, size: 32MB, took 0.659 secs
m30001| Thu Jun 14 01:28:20 [FileAllocator] done allocating datafile /data/db/auto11/test.ns, size: 16MB, took 0.336 secs
m30001| Thu Jun 14 01:28:20 [FileAllocator] allocating new datafile /data/db/auto11/test.0, filling with zeroes...
m30001| Thu Jun 14 01:28:20 [FileAllocator] done allocating datafile /data/db/auto11/test.0, size: 16MB, took 0.248 secs
m30001| Thu Jun 14 01:28:20 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:28:20 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:28:20 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:28:20 [conn2] build index test.foo { num: 1.0 }
m30001| Thu Jun 14 01:28:20 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:28:20 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:6 W:58 r:264 w:1167176 1167ms
m30001| Thu Jun 14 01:28:20 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd9767369ab01b9a70406ec'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:77 reslen:51 1164ms
m30001| Thu Jun 14 01:28:20 [FileAllocator] allocating new datafile /data/db/auto11/test.1, filling with zeroes...
m30001| Thu Jun 14 01:28:20 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:28:20 [initandlisten] connection accepted from 127.0.0.1:38966 #8 (8 connections now open)
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
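
The failed/retried setShardVersion pair above is the normal first-contact handshake: the shard has never heard of test.foo ("no current chunk manager found for this shard, will initialize"), so it answers need_authoritative: true and mongos re-sends the version with authoritative: true. setShardVersion itself is internal, but the version a shard currently holds can be checked with the getShardVersion command run directly against the shard; a sketch, assuming the shard on localhost:30001 from the log:

    var shard = connect("localhost:30001/admin");
    printjson(shard.runCommand({ getShardVersion: "test.foo" }));
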
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 7447688 splitThreshold: 921
m30999| Thu Jun 14 01:28:20 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:20 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:20 [conn] connected connection!
m30001| Thu Jun 14 01:28:20 [initandlisten] connection accepted from 127.0.0.1:58857 #4 (4 connections now open)
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 51255 splitThreshold: 921
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split { num: 1.0 }
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 51255 splitThreshold: 921
m30000| Thu Jun 14 01:28:20 [initandlisten] connection accepted from 127.0.0.1:38968 #9 (9 connections now open)
m30000| Thu Jun 14 01:28:20 [initandlisten] connection accepted from 127.0.0.1:38969 #10 (10 connections now open)
m30999| Thu Jun 14 01:28:20 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd9767369ab01b9a70406ee based on: 1|0||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } on: { num: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 3547427 splitThreshold: 471859
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:20 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd9767369ab01b9a70406ee based on: 1|2||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } on: { num: 11.0 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:28:20 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:28:20 [conn] recently split chunk: { min: { num: 11.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:20 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 8364002 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:20 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:20 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:20 [conn4] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30001| Thu Jun 14 01:28:20 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:20 [conn4] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30001| Thu Jun 14 01:28:20 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:20 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:20 [conn4] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30001| Thu Jun 14 01:28:20 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: MinKey }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 0.0 } ], shardId: "test.foo-num_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:20 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:20 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd976741a349e6162f9a583
m30001| Thu Jun 14 01:28:20 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651700:1911514757 (sleeping for 30000ms)
m30001| Thu Jun 14 01:28:20 [conn4] splitChunk accepted at version 1|0||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:20-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651700556), what: "split", ns: "test.foo", details: { before: { min: { num: MinKey }, max: { num: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: MinKey }, max: { num: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 0.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30001| Thu Jun 14 01:28:20 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
m30001| Thu Jun 14 01:28:20 [conn4] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:20 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:20 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 0.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 11.0 } ], shardId: "test.foo-num_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:20 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:20 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd976741a349e6162f9a584
m30001| Thu Jun 14 01:28:20 [conn4] splitChunk accepted at version 1|2||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:20-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651700565), what: "split", ns: "test.foo", details: { before: { min: { num: 0.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 0.0 }, max: { num: 11.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 11.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30001| Thu Jun 14 01:28:20 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
ShardingTest test.foo-num_MinKey 1000|1 { "num" : { $minKey : 1 } } -> { "num" : 0 } shard0001 test.foo
test.foo-num_0.0 1000|3 { "num" : 0 } -> { "num" : 11 } shard0001 test.foo
test.foo-num_11.0 1000|4 { "num" : 11 } -> { "num" : { $maxKey : 1 } } shard0001 test.foo
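
At this point two autosplits (at num 0, then num 11) have cut the original MinKey->MaxKey chunk into the three chunks listed above, all still on shard0001. mongos tracks approximate bytes written per chunk and, once the split threshold is crossed, asks the owning shard for split points; the shard takes the collection's distributed lock, runs splitChunk, and logs a "split" event. The same split could be forced by hand through mongos with the split command (the middle key 11 is taken from the log; using the "middle" form is my assumption):

    var admin = connect("localhost:30999/admin");
    printjson(admin.runCommand({ split: "test.foo", middle: { num: 11 } }));
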
m30001| Thu Jun 14 01:28:21 [FileAllocator] done allocating datafile /data/db/auto11/test.1, size: 32MB, took 0.634 secs
m30001| Thu Jun 14 01:28:21 [FileAllocator] allocating new datafile /data/db/auto11/test.2, filling with zeroes...
m30001| Thu Jun 14 01:28:21 [conn3] insert test.foo keyUpdates:0 locks(micros) W:99 r:328 w:488134 480ms
datasize: {
"estimate" : false,
"size" : 5128368,
"numObjects" : 100,
"millis" : 36,
"ok" : 1
}
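
The block above is the shell's printjson of a dataSize result: 100 documents totaling 5,128,368 bytes, consistent with the ~51 KB documents seen as clonedBytes: 51255 elsewhere in this log. A sketch of issuing the command directly (the keyPattern/min/max scoping is an assumption about how the test bounded the call):

    var testdb = connect("localhost:30999/test");
    printjson(testdb.runCommand({
        dataSize: "test.foo",
        keyPattern: { num: 1 },
        min: { num: MinKey },
        max: { num: MaxKey }
    }));
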
ShardingTest test.foo-num_MinKey 1000|1 { "num" : { $minKey : 1 } } -> { "num" : 0 } shard0001 test.foo
test.foo-num_0.0 1000|3 { "num" : 0 } -> { "num" : 11 } shard0001 test.foo
test.foo-num_11.0 1000|4 { "num" : 11 } -> { "num" : { $maxKey : 1 } } shard0001 test.foo
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:20 GMT-0400 (EDT) split test.foo { "num" : { $minKey : 1 } } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : { $minKey : 1 } } -> { "num" : 0 }),({ "num" : 0 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:20 GMT-0400 (EDT) split test.foo { "num" : 0 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 0 } -> { "num" : 11 }),({ "num" : 11 } -> { "num" : { $maxKey : 1 } })
m30999| Thu Jun 14 01:28:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:21 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:21 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:21 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:21 [conn4] request split points lookup for chunk test.foo { : 11.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:21 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 11.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:21 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 11.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 199.0 } ], shardId: "test.foo-num_11.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:21 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:21 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd976751a349e6162f9a585
m30001| Thu Jun 14 01:28:21 [conn4] splitChunk accepted at version 1|4||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:21-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651701361), what: "split", ns: "test.foo", details: { before: { min: { num: 11.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 11.0 }, max: { num: 199.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 199.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30001| Thu Jun 14 01:28:21 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
m30001| Thu Jun 14 01:28:21 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 199.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_199.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:21 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:21 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd976751a349e6162f9a586
m30001| Thu Jun 14 01:28:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:21-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651701365), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 199.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:21 [conn4] moveChunk request accepted at version 1|6||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:21 [conn4] moveChunk number of documents: 1
m30001| Thu Jun 14 01:28:21 [initandlisten] connection accepted from 127.0.0.1:58860 #5 (5 connections now open)
m30000| Thu Jun 14 01:28:21 [FileAllocator] allocating new datafile /data/db/auto10/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:28:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:21 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:21 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd9767369ab01b9a70406ee based on: 1|4||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } on: { num: 199.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:21 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:28:21 [conn] moving chunk (auto): ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } to: shard0000:localhost:30000
m30999| Thu Jun 14 01:28:21 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:28:22 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 199.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:22 [FileAllocator] done allocating datafile /data/db/auto11/test.2, size: 64MB, took 1.558 secs
m30000| Thu Jun 14 01:28:22 [FileAllocator] done allocating datafile /data/db/auto10/test.ns, size: 16MB, took 1.372 secs
m30000| Thu Jun 14 01:28:22 [FileAllocator] allocating new datafile /data/db/auto10/test.0, filling with zeroes...
m30000| Thu Jun 14 01:28:23 [FileAllocator] done allocating datafile /data/db/auto10/test.0, size: 16MB, took 0.43 secs
m30000| Thu Jun 14 01:28:23 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:28:23 [FileAllocator] allocating new datafile /data/db/auto10/test.1, filling with zeroes...
m30000| Thu Jun 14 01:28:23 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:23 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:28:23 [migrateThread] build index test.foo { num: 1.0 }
m30000| Thu Jun 14 01:28:23 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 199.0 } -> { num: MaxKey }
m30001| Thu Jun 14 01:28:23 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 199.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:23 [conn4] moveChunk setting version to: 2|0||4fd9767369ab01b9a70406ee
m30000| Thu Jun 14 01:28:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 199.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:28:23 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:23-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651703384), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 199.0 }, max: { num: MaxKey }, step1 of 5: 1814, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 202 } }
m30000| Thu Jun 14 01:28:23 [initandlisten] connection accepted from 127.0.0.1:38971 #11 (11 connections now open)
m30999| Thu Jun 14 01:28:23 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:23 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 2|1||4fd9767369ab01b9a70406ee based on: 1|6||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { num: 11.0 } max: { num: 199.0 } dataWritten: 3272751 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:23 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 2|3||4fd9767369ab01b9a70406ee based on: 2|1||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { num: 11.0 } max: { num: 199.0 } on: { num: 76.0 } (splitThreshold 13107200)
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|3, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { num: 76.0 } max: { num: 199.0 } dataWritten: 7380950 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split { num: 167.0 }
m30001| Thu Jun 14 01:28:23 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 199.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:28:23 [conn4] moveChunk updating self version to: 2|1||4fd9767369ab01b9a70406ee through { num: MinKey } -> { num: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:28:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:23-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651703388), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 199.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:23 [conn4] doing delete inline
m30001| Thu Jun 14 01:28:23 [conn4] moveChunk deleted: 1
m30001| Thu Jun 14 01:28:23 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
m30001| Thu Jun 14 01:28:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:23-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651703389), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 199.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:28:23 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 199.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_199.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:1332 w:447 reslen:37 2025ms
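
That 2025ms admin command line closes out the migration started at 01:28:21: donor shard0001 cloned one ~51 KB document for the { num: 199 } -> MaxKey chunk, waited for the recipient to reach "steady"/"done", committed version 2|0 at the config server, deleted the moved document inline, and released the distributed lock (step4 of 6, the transfer/catch-up wait, accounts for 2006 of the 2025ms). mongos triggered this move automatically after the split ("migrate suggested"), but the same migration can be requested by hand through mongos; a sketch, with the find-key taken from the chunk bounds in the log:

    var admin = connect("localhost:30999/admin");
    printjson(admin.runCommand({ moveChunk: "test.foo", find: { num: 199 }, to: "shard0000" }));
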
m30001| Thu Jun 14 01:28:23 [conn4] request split points lookup for chunk test.foo { : 11.0 } -->> { : 199.0 }
m30001| Thu Jun 14 01:28:23 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 11.0 } -->> { : 199.0 }
m30001| Thu Jun 14 01:28:23 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 11.0 }, max: { num: 199.0 }, from: "shard0001", splitKeys: [ { num: 76.0 } ], shardId: "test.foo-num_11.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:23 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:23 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd976771a349e6162f9a587
m30001| Thu Jun 14 01:28:23 [conn4] splitChunk accepted at version 2|1||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:23-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651703393), what: "split", ns: "test.foo", details: { before: { min: { num: 11.0 }, max: { num: 199.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 11.0 }, max: { num: 76.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 76.0 }, max: { num: 199.0 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30001| Thu Jun 14 01:28:23 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
m30001| Thu Jun 14 01:28:23 [conn4] request split points lookup for chunk test.foo { : 76.0 } -->> { : 199.0 }
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { num: 76.0 } max: { num: 199.0 } dataWritten: 2665260 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split { num: 139.0 }
m30001| Thu Jun 14 01:28:23 [conn4] request split points lookup for chunk test.foo { : 76.0 } -->> { : 199.0 }
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0000", shardHost: "localhost:30000" } 0xa3f4748
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa3f4748
m30999| Thu Jun 14 01:28:23 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:28:23 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 6144357 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split no split entry
m30000| Thu Jun 14 01:28:23 [FileAllocator] done allocating datafile /data/db/auto10/test.1, size: 32MB, took 0.764 secs
m30000| Thu Jun 14 01:28:23 [FileAllocator] allocating new datafile /data/db/auto10/test.2, filling with zeroes...
m30000| Thu Jun 14 01:28:23 [conn7] insert test.foo keyUpdates:0 locks(micros) W:97 r:506 w:428573 421ms
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split no split entry
ShardingTest test.foo-num_MinKey 2000|1 { "num" : { $minKey : 1 } } -> { "num" : 0 } shard0001 test.foo
test.foo-num_0.0 1000|3 { "num" : 0 } -> { "num" : 11 } shard0001 test.foo
test.foo-num_11.0 2000|2 { "num" : 11 } -> { "num" : 76 } shard0001 test.foo
test.foo-num_76.0 2000|3 { "num" : 76 } -> { "num" : 199 } shard0001 test.foo
test.foo-num_199.0 2000|0 { "num" : 199 } -> { "num" : { $maxKey : 1 } } shard0000 test.foo
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:20 GMT-0400 (EDT) split test.foo { "num" : { $minKey : 1 } } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : { $minKey : 1 } } -> { "num" : 0 }),({ "num" : 0 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:20 GMT-0400 (EDT) split test.foo { "num" : 0 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 0 } -> { "num" : 11 }),({ "num" : 11 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:21 GMT-0400 (EDT) split test.foo { "num" : 11 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 11 } -> { "num" : 199 }),({ "num" : 199 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:21 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 1814, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 202 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 2006, "step5 of 6" : 16, "step6 of 6" : 0 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) split test.foo { "num" : 11 } -> { "num" : 199 } -->> ({ "num" : 11 } -> { "num" : 76 }),({ "num" : 76 } -> { "num" : 199 })
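
These "ShardingTest ... split / moveChunk.*" lines are the harness echoing config.changelog, where every split and each moveChunk phase (start, to, commit, from) is recorded with its per-step timings. A sketch for pulling the same events straight from the config database (the query shape is my own):

    var conf = connect("localhost:30999/config");
    printjson(conf.changelog.find({ ns: "test.foo" }).sort({ time: 1 }).toArray());
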
m30999| Thu Jun 14 01:28:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { num: 11.0 } max: { num: 76.0 } dataWritten: 6130282 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:23 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:23 [conn4] request split points lookup for chunk test.foo { : 11.0 } -->> { : 76.0 }
m30999| Thu Jun 14 01:28:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { num: 11.0 } max: { num: 76.0 } dataWritten: 2665260 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:24 [conn] chunk not full enough to trigger auto-split { num: 56.0 }
m30001| Thu Jun 14 01:28:24 [conn4] request split points lookup for chunk test.foo { : 11.0 } -->> { : 76.0 }
m30999| Thu Jun 14 01:28:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { num: 76.0 } max: { num: 199.0 } dataWritten: 2665260 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:24 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 2|5||4fd9767369ab01b9a70406ee based on: 2|3||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { num: 76.0 } max: { num: 199.0 } on: { num: 131.0 } (splitThreshold 13107200)
m30999| Thu Jun 14 01:28:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|5, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:24 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { num: 76.0 } max: { num: 131.0 } dataWritten: 3973102 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:24 [conn] chunk not full enough to trigger auto-split { num: 130.0 }
m30999| Thu Jun 14 01:28:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { num: 131.0 } max: { num: 199.0 } dataWritten: 2642303 splitThreshold: 13107200
m30999| Thu Jun 14 01:28:24 [conn] chunk not full enough to trigger auto-split { num: 177.0 }
m30999| Thu Jun 14 01:28:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0000", shardHost: "localhost:30000" } 0xa3f4748
m30999| Thu Jun 14 01:28:24 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:24 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 6035524 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:24 [conn4] request split points lookup for chunk test.foo { : 76.0 } -->> { : 199.0 }
m30001| Thu Jun 14 01:28:24 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 76.0 } -->> { : 199.0 }
m30001| Thu Jun 14 01:28:24 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 76.0 }, max: { num: 199.0 }, from: "shard0001", splitKeys: [ { num: 131.0 } ], shardId: "test.foo-num_76.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:24 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:24 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd976781a349e6162f9a588
m30001| Thu Jun 14 01:28:24 [conn4] splitChunk accepted at version 2|3||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:24 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:24-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651704037), what: "split", ns: "test.foo", details: { before: { min: { num: 76.0 }, max: { num: 199.0 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 76.0 }, max: { num: 131.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 131.0 }, max: { num: 199.0 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30001| Thu Jun 14 01:28:24 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
m30999| Thu Jun 14 01:28:24 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:24 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:24 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:24 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:24 [conn4] request split points lookup for chunk test.foo { : 76.0 } -->> { : 131.0 }
m30001| Thu Jun 14 01:28:24 [conn4] request split points lookup for chunk test.foo { : 131.0 } -->> { : 199.0 }
m30000| Thu Jun 14 01:28:24 [conn6] request split points lookup for chunk test.foo { : 199.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:24 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 199.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:24 [initandlisten] connection accepted from 127.0.0.1:38972 #12 (12 connections now open)
m30000| Thu Jun 14 01:28:24 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 199.0 }, max: { num: MaxKey }, from: "shard0000", splitKeys: [ { num: 399.0 } ], shardId: "test.foo-num_199.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:24 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:24 [conn6] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651704:1945785342' acquired, ts : 4fd976780d395a748c1ad36a
m30000| Thu Jun 14 01:28:24 [conn6] splitChunk accepted at version 2|0||4fd9767369ab01b9a70406ee
m30000| Thu Jun 14 01:28:24 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:24-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38961", time: new Date(1339651704268), what: "split", ns: "test.foo", details: { before: { min: { num: 199.0 }, max: { num: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 199.0 }, max: { num: 399.0 }, lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 399.0 }, max: { num: MaxKey }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30000| Thu Jun 14 01:28:24 [conn6] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651704:1945785342' unlocked.
m30000| Thu Jun 14 01:28:24 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339651704:1945785342 (sleeping for 30000ms)
m30999| Thu Jun 14 01:28:24 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 2|7||4fd9767369ab01b9a70406ee based on: 2|5||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 199.0 } max: { num: MaxKey } on: { num: 399.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:24 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:28:24 [conn] moving chunk (auto): ns:test.foo at: shard0000:localhost:30000 lastmod: 2|7||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } to: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:24 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0000:localhost:30000 lastmod: 2|7||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:28:24 [conn6] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 399.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_399.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:24 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:24 [conn6] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651704:1945785342' acquired, ts : 4fd976780d395a748c1ad36b
m30000| Thu Jun 14 01:28:24 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:24-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38961", time: new Date(1339651704289), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 399.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:28:24 [conn6] moveChunk request accepted at version 2|7||4fd9767369ab01b9a70406ee
m30000| Thu Jun 14 01:28:24 [conn6] moveChunk number of documents: 1
m30001| Thu Jun 14 01:28:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 399.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:28:25 [conn6] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { num: 399.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:28:25 [conn6] moveChunk setting version to: 3|0||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 399.0 } -> { num: MaxKey }
m30001| Thu Jun 14 01:28:25 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:25-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651705300), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 399.0 }, max: { num: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30000| Thu Jun 14 01:28:25 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { num: 399.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:28:25 [conn6] moveChunk updating self version to: 3|1||4fd9767369ab01b9a70406ee through { num: 199.0 } -> { num: 399.0 } for collection 'test.foo'
m30000| Thu Jun 14 01:28:25 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:25-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38961", time: new Date(1339651705304), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 399.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:28:25 [conn6] doing delete inline
m30000| Thu Jun 14 01:28:25 [conn6] moveChunk deleted: 1
m30000| Thu Jun 14 01:28:25 [conn6] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651704:1945785342' unlocked.
m30999| Thu Jun 14 01:28:25 [conn] moveChunk result: { ok: 1.0 }
m30000| Thu Jun 14 01:28:25 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:25-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38961", time: new Date(1339651705649), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 399.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 344 } }
m30000| Thu Jun 14 01:28:25 [conn6] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 399.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_399.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:4110 w:344805 reslen:37 1361ms
m30000| Thu Jun 14 01:28:25 [FileAllocator] done allocating datafile /data/db/auto10/test.2, size: 64MB, took 1.702 secs
m30999| Thu Jun 14 01:28:25 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 3|1||4fd9767369ab01b9a70406ee based on: 2|7||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0000", shardHost: "localhost:30000" } 0xa3f4748
m30999| Thu Jun 14 01:28:25 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|1||000000000000000000000000 min: { num: 199.0 } max: { num: 399.0 } dataWritten: 3402891 splitThreshold: 13107200
m30000| Thu Jun 14 01:28:25 [conn6] request split points lookup for chunk test.foo { : 199.0 } -->> { : 399.0 }
m30000| Thu Jun 14 01:28:25 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 199.0 } -->> { : 399.0 }
m30000| Thu Jun 14 01:28:25 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 199.0 }, max: { num: 399.0 }, from: "shard0000", splitKeys: [ { num: 262.0 } ], shardId: "test.foo-num_199.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:25 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:25 [conn6] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651704:1945785342' acquired, ts : 4fd976790d395a748c1ad36c
m30000| Thu Jun 14 01:28:25 [conn6] splitChunk accepted at version 3|1||4fd9767369ab01b9a70406ee
m30000| Thu Jun 14 01:28:25 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:25-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38961", time: new Date(1339651705655), what: "split", ns: "test.foo", details: { before: { min: { num: 199.0 }, max: { num: 399.0 }, lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 199.0 }, max: { num: 262.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 262.0 }, max: { num: 399.0 }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30000| Thu Jun 14 01:28:25 [conn6] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651704:1945785342' unlocked.
m30999| Thu Jun 14 01:28:25 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 3|3||4fd9767369ab01b9a70406ee based on: 3|1||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|1||000000000000000000000000 min: { num: 199.0 } max: { num: 399.0 } on: { num: 262.0 } (splitThreshold 13107200)
m30999| Thu Jun 14 01:28:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|3, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0000", shardHost: "localhost:30000" } 0xa3f4748
m30999| Thu Jun 14 01:28:25 [conn] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|3||000000000000000000000000 min: { num: 262.0 } max: { num: 399.0 } dataWritten: 2865342 splitThreshold: 13107200
m30000| Thu Jun 14 01:28:25 [conn6] request split points lookup for chunk test.foo { : 262.0 } -->> { : 399.0 }
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split { num: 355.0 }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|3||000000000000000000000000 min: { num: 262.0 } max: { num: 399.0 } dataWritten: 2665260 splitThreshold: 13107200
m30000| Thu Jun 14 01:28:25 [conn6] request split points lookup for chunk test.foo { : 262.0 } -->> { : 399.0 }
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split { num: 325.0 }
m30999| Thu Jun 14 01:28:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0001", shardHost: "localhost:30001" } 0xa3f5698
m30999| Thu Jun 14 01:28:25 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } dataWritten: 8291700 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:25 [conn4] request split points lookup for chunk test.foo { : 399.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:25 [conn4] request split points lookup for chunk test.foo { : 399.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:25 [conn4] request split points lookup for chunk test.foo { : 399.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split { num: 525.0 }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split { num: 525.0 }
m30999| Thu Jun 14 01:28:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:25 [conn] chunk not full enough to trigger auto-split { num: 525.0 }
m30999| Thu Jun 14 01:28:26 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:26 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 3|5||4fd9767369ab01b9a70406ee based on: 3|3||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:26 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 399.0 } max: { num: MaxKey } on: { num: 681.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:26 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 96 writeLock: 0
m30999| Thu Jun 14 01:28:26 [conn] moving chunk (auto): ns:test.foo at: shard0001:localhost:30001 lastmod: 3|5||000000000000000000000000 min: { num: 681.0 } max: { num: MaxKey } to: shard0000:localhost:30000
m30999| Thu Jun 14 01:28:26 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 3|5||000000000000000000000000 min: { num: 681.0 } max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30000| Thu Jun 14 01:28:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 681.0 } -> { num: MaxKey }
m30001| Thu Jun 14 01:28:25 [FileAllocator] allocating new datafile /data/db/auto11/test.3, filling with zeroes...
m30001| Thu Jun 14 01:28:25 [conn4] request split points lookup for chunk test.foo { : 399.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:25 [conn4] request split points lookup for chunk test.foo { : 399.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:25 [conn4] request split points lookup for chunk test.foo { : 399.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:26 [conn4] request split points lookup for chunk test.foo { : 399.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:26 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 399.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:26 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 399.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 681.0 } ], shardId: "test.foo-num_399.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:26 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd9767a1a349e6162f9a589
m30001| Thu Jun 14 01:28:26 [conn4] splitChunk accepted at version 3|0||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:26-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651706029), what: "split", ns: "test.foo", details: { before: { min: { num: 399.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 399.0 }, max: { num: 681.0 }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') }, right: { min: { num: 681.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('4fd9767369ab01b9a70406ee') } } }
m30001| Thu Jun 14 01:28:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
m30001| Thu Jun 14 01:28:26 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 681.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_681.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:26 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' acquired, ts : 4fd9767a1a349e6162f9a58a
m30001| Thu Jun 14 01:28:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:26-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651706033), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 681.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:26 [conn4] moveChunk request accepted at version 3|5||4fd9767369ab01b9a70406ee
m30001| Thu Jun 14 01:28:26 [conn4] moveChunk number of documents: 1
m30001| Thu Jun 14 01:28:27 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 681.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:27 [conn4] moveChunk setting version to: 4|0||4fd9767369ab01b9a70406ee
m30000| Thu Jun 14 01:28:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 681.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:28:27 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:27-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651707044), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 681.0 }, max: { num: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30999| Thu Jun 14 01:28:27 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:27 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 4|1||4fd9767369ab01b9a70406ee based on: 3|5||4fd9767369ab01b9a70406ee
m30999| Thu Jun 14 01:28:27 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), serverID: ObjectId('4fd9767369ab01b9a70406ec'), shard: "shard0000", shardHost: "localhost:30000" } 0xa3f4748
m30999| Thu Jun 14 01:28:27 [conn] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('4fd9767369ab01b9a70406ee'), ok: 1.0 }
m30001| Thu Jun 14 01:28:27 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 681.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:28:27 [conn4] moveChunk updating self version to: 4|1||4fd9767369ab01b9a70406ee through { num: MinKey } -> { num: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:28:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:27-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651707048), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 681.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:27 [conn4] doing delete inline
m30001| Thu Jun 14 01:28:27 [conn4] moveChunk deleted: 1
m30001| Thu Jun 14 01:28:27 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651700:1911514757' unlocked.
m30001| Thu Jun 14 01:28:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:27-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58857", time: new Date(1339651707049), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 681.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:28:27 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 681.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_681.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:6989 w:862 reslen:37 1016ms
m30999| Thu Jun 14 01:28:27 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 681.0 } max: { num: MaxKey } dataWritten: 8653603 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:27 [conn] chunk not full enough to trigger auto-split no split entry
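The migration just logged (moveChunk.start through moveChunk.from with its six timed steps) was triggered because the autosplit produced a new top chunk and mongos suggested moving it to the emptier shard. The equivalent manual request through the same mongos would look roughly like this; a sketch, not a replay of the exact internal command, with shard names and the key value taken from this run:

    // move the { num: 681 } -> MaxKey chunk from shard0001 to shard0000,
    // as the autosplit/"migrate suggested" path did above
    var admin = db.getSiblingDB("admin");
    printjson(admin.runCommand({
        moveChunk: "test.foo",
        find: { num: 681 },      // any shard-key value inside the chunk selects it
        to: "shard0000"
    }));

    // the timed steps land in config.changelog as moveChunk.from / moveChunk.to events
    db.getSiblingDB("config").changelog.find({ what: /^moveChunk/ })
        .sort({ time: -1 }).limit(3).forEach(printjson);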
ShardingTest test.foo-num_MinKey 4000|1 { "num" : { $minKey : 1 } } -> { "num" : 0 } shard0001 test.foo
test.foo-num_0.0 1000|3 { "num" : 0 } -> { "num" : 11 } shard0001 test.foo
test.foo-num_11.0 2000|2 { "num" : 11 } -> { "num" : 76 } shard0001 test.foo
test.foo-num_76.0 2000|4 { "num" : 76 } -> { "num" : 131 } shard0001 test.foo
test.foo-num_131.0 2000|5 { "num" : 131 } -> { "num" : 199 } shard0001 test.foo
test.foo-num_199.0 3000|2 { "num" : 199 } -> { "num" : 262 } shard0000 test.foo
test.foo-num_262.0 3000|3 { "num" : 262 } -> { "num" : 399 } shard0000 test.foo
test.foo-num_399.0 3000|4 { "num" : 399 } -> { "num" : 681 } shard0001 test.foo
test.foo-num_681.0 4000|0 { "num" : 681 } -> { "num" : { $maxKey : 1 } } shard0000 test.foo
m30000| Thu Jun 14 01:28:27 [conn6] request split points lookup for chunk test.foo { : 681.0 } -->> { : MaxKey }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:20 GMT-0400 (EDT) split test.foo { "num" : { $minKey : 1 } } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : { $minKey : 1 } } -> { "num" : 0 }),({ "num" : 0 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:20 GMT-0400 (EDT) split test.foo { "num" : 0 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 0 } -> { "num" : 11 }),({ "num" : 11 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:21 GMT-0400 (EDT) split test.foo { "num" : 11 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 11 } -> { "num" : 199 }),({ "num" : 199 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:21 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 1814, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 202 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 199 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 2006, "step5 of 6" : 16, "step6 of 6" : 0 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:23 GMT-0400 (EDT) split test.foo { "num" : 11 } -> { "num" : 199 } -->> ({ "num" : 11 } -> { "num" : 76 }),({ "num" : 76 } -> { "num" : 199 })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:24 GMT-0400 (EDT) split test.foo { "num" : 76 } -> { "num" : 199 } -->> ({ "num" : 76 } -> { "num" : 131 }),({ "num" : 131 } -> { "num" : 199 })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:24 GMT-0400 (EDT) split test.foo { "num" : 199 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 199 } -> { "num" : 399 }),({ "num" : 399 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:24 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 399 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0000", "to" : "shard0001" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:25 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 399 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 1009 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:25 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 399 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0000", "to" : "shard0001" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:25 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 399 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 1002, "step5 of 6" : 12, "step6 of 6" : 344 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:25 GMT-0400 (EDT) split test.foo { "num" : 199 } -> { "num" : 399 } -->> ({ "num" : 199 } -> { "num" : 262 }),({ "num" : 262 } -> { "num" : 399 })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:26 GMT-0400 (EDT) split test.foo { "num" : 399 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 399 } -> { "num" : 681 }),({ "num" : 681 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:26 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 681 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:27 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 681 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 1009 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:27 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 681 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:27 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 681 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 1002, "step5 of 6" : 12, "step6 of 6" : 0 }
3,3,5,9
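The chunk table and the changelog dump above come from ShardingTest's status printer, and the "3,3,5,9" line appears to be the test's own tally of chunk counts at successive checkpoints. The same information lives in the config database and can be pulled directly; a small sketch, assuming a shell connected to the mongos:

    var config = db.getSiblingDB("config");

    // one line per chunk: range and owning shard, as in the table above
    config.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(function (c) {
        print(tojson(c.min) + " -> " + tojson(c.max) + "  on  " + c.shard);
    });

    // chunk count per shard
    config.shards.find().forEach(function (s) {
        print(s._id + ": " + config.chunks.count({ ns: "test.foo", shard: s._id }));
    });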
m30999| Thu Jun 14 01:28:27 [conn] RunOnAllShardsCommand db: test cmd:{ dbstats: 1.0, scale: undefined }
m30001| Thu Jun 14 01:28:28 [FileAllocator] done allocating datafile /data/db/auto11/test.3, size: 128MB, took 2.713 secs
m30001| Thu Jun 14 01:28:28 [conn4] command test.$cmd command: { dbstats: 1.0, scale: undefined } ntoreturn:1 keyUpdates:0 locks(micros) r:1425232 w:862 reslen:203 1418ms
{
"raw" : {
"localhost:30000" : {
"db" : "test",
"collections" : 3,
"objects" : 426,
"avgObjSize" : 50541.18309859155,
"dataSize" : 21530544,
"storageSize" : 35090432,
"numExtents" : 7,
"indexes" : 2,
"indexSize" : 49056,
"fileSize" : 117440512,
"nsSizeMB" : 16,
"ok" : 1
},
"localhost:30001" : {
"db" : "test",
"collections" : 3,
"objects" : 856,
"avgObjSize" : 50900.26168224299,
"dataSize" : 43570624,
"storageSize" : 59101184,
"numExtents" : 8,
"indexes" : 2,
"indexSize" : 81760,
"fileSize" : 251658240,
"nsSizeMB" : 16,
"ok" : 1
}
},
"objects" : 1282,
"avgObjSize" : 50780.94227769111,
"dataSize" : 65101168,
"storageSize" : 94191616,
"numExtents" : 15,
"indexes" : 4,
"indexSize" : 130816,
"fileSize" : 369098752,
"ok" : 1
}
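The JSON above is what mongos returns for dbstats: it runs the command on every shard (the RunOnAllShardsCommand line), keeps the per-shard replies under "raw", and sums the countable fields (objects, dataSize, storageSize, numExtents, indexes, indexSize, fileSize) into the top-level totals. A quick way to see the same thing from the shell, assuming a connection to the mongos:

    // dbstats through mongos: per-shard results plus summed totals
    var stats = db.getSiblingDB("test").runCommand({ dbstats: 1 });
    printjson(stats.raw);                  // one entry per shard host
    print("total objects:  " + stats.objects);
    print("total dataSize: " + stats.dataSize);

    // sanity check: the totals are the sum of the raw entries
    var sum = 0;
    for (var host in stats.raw) { sum += stats.raw[host].objects; }
    print("sum of raw objects: " + sum);   // 426 + 856 = 1282 in this run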
m30999| Thu Jun 14 01:28:28 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:28:28 [conn3] end connection 127.0.0.1:38956 (11 connections now open)
m30000| Thu Jun 14 01:28:28 [conn4] end connection 127.0.0.1:38959 (10 connections now open)
m30001| Thu Jun 14 01:28:28 [conn4] end connection 127.0.0.1:58857 (4 connections now open)
m30000| Thu Jun 14 01:28:28 [conn6] end connection 127.0.0.1:38961 (9 connections now open)
m30001| Thu Jun 14 01:28:28 [conn3] end connection 127.0.0.1:58855 (3 connections now open)
m30000| Thu Jun 14 01:28:28 [conn7] end connection 127.0.0.1:38964 (8 connections now open)
Thu Jun 14 01:28:29 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:28:29 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:28:29 [interruptThread] now exiting
m30000| Thu Jun 14 01:28:29 dbexit:
m30000| Thu Jun 14 01:28:29 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:28:29 [interruptThread] closing listening socket: 17
m30000| Thu Jun 14 01:28:29 [interruptThread] closing listening socket: 18
m30000| Thu Jun 14 01:28:29 [interruptThread] closing listening socket: 19
m30000| Thu Jun 14 01:28:29 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:28:29 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:28:29 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:28:29 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:28:29 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:28:29 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:28:29 [conn5] end connection 127.0.0.1:58860 (2 connections now open)
m30000| Thu Jun 14 01:28:29 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:28:29 [conn11] end connection 127.0.0.1:38971 (7 connections now open)
m30000| Thu Jun 14 01:28:29 dbexit: really exiting now
Thu Jun 14 01:28:30 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:28:30 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:28:30 [interruptThread] now exiting
m30001| Thu Jun 14 01:28:30 dbexit:
m30001| Thu Jun 14 01:28:30 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:28:30 [interruptThread] closing listening socket: 20
m30001| Thu Jun 14 01:28:30 [interruptThread] closing listening socket: 21
m30001| Thu Jun 14 01:28:30 [interruptThread] closing listening socket: 22
m30001| Thu Jun 14 01:28:30 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:28:30 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:28:30 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:28:30 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:28:30 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:28:30 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:28:30 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:28:30 dbexit: really exiting now
Thu Jun 14 01:28:31 shell: stopped mongo program on port 30001
*** ShardingTest auto1 completed successfully in 13.334 seconds ***
13399.280071ms
Thu Jun 14 01:28:31 [initandlisten] connection accepted from 127.0.0.1:58974 #10 (9 connections now open)
*******************************************
Test : auto2.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/auto2.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/auto2.js";TestData.testFile = "auto2.js";TestData.testName = "auto2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:28:31 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/auto20'
Thu Jun 14 01:28:31 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/auto20
m30000| Thu Jun 14 01:28:31
m30000| Thu Jun 14 01:28:31 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:28:31
m30000| Thu Jun 14 01:28:31 [initandlisten] MongoDB starting : pid=22338 port=30000 dbpath=/data/db/auto20 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:28:31 [initandlisten]
m30000| Thu Jun 14 01:28:31 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:28:31 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:28:31 [initandlisten]
m30000| Thu Jun 14 01:28:31 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:28:31 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:28:31 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:28:31 [initandlisten]
m30000| Thu Jun 14 01:28:31 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:28:31 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:28:31 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:28:31 [initandlisten] options: { dbpath: "/data/db/auto20", port: 30000 }
m30000| Thu Jun 14 01:28:31 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:28:31 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/auto21'
m30000| Thu Jun 14 01:28:31 [initandlisten] connection accepted from 127.0.0.1:38975 #1 (1 connection now open)
Thu Jun 14 01:28:31 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/auto21
m30001| Thu Jun 14 01:28:31
m30001| Thu Jun 14 01:28:31 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:28:31
m30001| Thu Jun 14 01:28:31 [initandlisten] MongoDB starting : pid=22351 port=30001 dbpath=/data/db/auto21 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:28:31 [initandlisten]
m30001| Thu Jun 14 01:28:31 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:28:31 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:28:31 [initandlisten]
m30001| Thu Jun 14 01:28:31 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:28:31 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:28:31 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:28:31 [initandlisten]
m30001| Thu Jun 14 01:28:31 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:28:31 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:28:31 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:28:31 [initandlisten] options: { dbpath: "/data/db/auto21", port: 30001 }
m30001| Thu Jun 14 01:28:31 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:28:31 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:58867 #1 (1 connection now open)
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38978 #2 (2 connections now open)
ShardingTest auto2 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
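The block above is the summary ShardingTest prints after starting the config/data mongod on 30000, a second mongod on 30001, and (just below) two mongos routers on 30999 and 30998. In the jstests this fixture is built by the shell's ShardingTest helper; a rough sketch of that setup, hedged because the exact arguments auto2.js passes are not shown in this log and the positional constructor form is assumed:

    // assumed positional form: ShardingTest(name, numShards, verboseLevel, numMongos)
    var s = new ShardingTest("auto2", 2, 1, 2);

    var testDB = s.getDB("test");   // routed through the first mongos (30999 here)
    // ... test body: enable sharding, insert data, check chunk counts ...

    s.stop();                        // tears down both mongod and both mongos,
                                     // as seen at the end of auto1 above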
Thu Jun 14 01:28:32 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:28:32 [FileAllocator] allocating new datafile /data/db/auto20/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:28:32 [FileAllocator] creating directory /data/db/auto20/_tmp
m30999| Thu Jun 14 01:28:32 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:28:32 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22366 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:28:32 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:28:32 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:28:32 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:28:32 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:28:32 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:32 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38980 #3 (3 connections now open)
m30999| Thu Jun 14 01:28:32 [mongosMain] connected connection!
m30000| Thu Jun 14 01:28:32 [FileAllocator] done allocating datafile /data/db/auto20/config.ns, size: 16MB, took 0.246 secs
m30000| Thu Jun 14 01:28:32 [FileAllocator] allocating new datafile /data/db/auto20/config.0, filling with zeroes...
m30000| Thu Jun 14 01:28:32 [FileAllocator] done allocating datafile /data/db/auto20/config.0, size: 16MB, took 0.364 secs
m30000| Thu Jun 14 01:28:32 [FileAllocator] allocating new datafile /data/db/auto20/config.1, filling with zeroes...
m30000| Thu Jun 14 01:28:32 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:28:32 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:32 [conn2] insert config.settings keyUpdates:0 locks(micros) w:628816 628ms
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38984 #4 (4 connections now open)
m30000| Thu Jun 14 01:28:32 [conn4] build index config.version { _id: 1 }
m30999| Thu Jun 14 01:28:32 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:32 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:28:32 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:32 [mongosMain] connected connection!
m30000| Thu Jun 14 01:28:32 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:32 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:28:32 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:28:32 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:28:32 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:28:32 [websvr] admin web console waiting for connections on port 31999
m30000| Thu Jun 14 01:28:32 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:32 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:28:32 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:32 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:32 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:32 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:32 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:28:32 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:32 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:28:32 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:28:32 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:28:32 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:28:32 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:28:32 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:28:32
m30999| Thu Jun 14 01:28:32 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:32 [Balancer] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:28:32 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:28:32 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:32 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:32 [Balancer] connected connection!
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38985 #5 (5 connections now open)
m30999| Thu Jun 14 01:28:32 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:28:32 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:28:32 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651712:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651712:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:28:32 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97680a32f88a76fac6272" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:28:32 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' acquired, ts : 4fd97680a32f88a76fac6272
m30999| Thu Jun 14 01:28:32 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:28:32 [Balancer] no collections to balance
m30999| Thu Jun 14 01:28:32 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:28:32 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:28:32 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' unlocked.
m30000| Thu Jun 14 01:28:32 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:28:32 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:32 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651712:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:28:32 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:32 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:28:32 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:28:32 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:28:32 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651712:1804289383', sleeping for 30000ms
Thu Jun 14 01:28:32 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:30000 -v
m30998| Thu Jun 14 01:28:32 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:28:32 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22386 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:28:32 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:28:32 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:28:32 [mongosMain] options: { configdb: "localhost:30000", port: 30998, verbose: true }
m30998| Thu Jun 14 01:28:32 [mongosMain] config string : localhost:30000
m30998| Thu Jun 14 01:28:32 [mongosMain] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38987 #6 (6 connections now open)
m30999| Thu Jun 14 01:28:32 [mongosMain] connection accepted from 127.0.0.1:51073 #1 (1 connection now open)
m30998| Thu Jun 14 01:28:32 [mongosMain] connected connection!
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:28:32 [CheckConfigServers] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:28:32 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:28:32 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38989 #7 (7 connections now open)
m30998| Thu Jun 14 01:28:32 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:28:32 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:28:32 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:28:32 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:28:32 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:28:32 [CheckConfigServers] connected connection!
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:28:32 [Balancer] connected connection!
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38990 #8 (8 connections now open)
m30998| Thu Jun 14 01:28:32 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:28:32 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:28:32
m30998| Thu Jun 14 01:28:32 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:28:32 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:28:32 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:28:32 [Balancer] connected connection!
m30000| Thu Jun 14 01:28:32 [initandlisten] connection accepted from 127.0.0.1:38991 #9 (9 connections now open)
m30998| Thu Jun 14 01:28:32 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:28:32 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651712:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339651712:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339651712:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:28:32 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd976805714d7a403a48dcf" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd97680a32f88a76fac6272" } }
m30998| Thu Jun 14 01:28:32 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30998:1339651712:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:28:32 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651712:1804289383' acquired, ts : 4fd976805714d7a403a48dcf
m30998| Thu Jun 14 01:28:32 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:28:32 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30998:1339651712:1804289383', sleeping for 30000ms
m30998| Thu Jun 14 01:28:32 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:28:32 [Balancer] no collections to balance
m30998| Thu Jun 14 01:28:32 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:28:32 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:28:32 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651712:1804289383' unlocked.
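Above, the two mongos (30999 and 30998) take turns acquiring the 'balancer' distributed lock stored in config.locks; the second router sees the lock document the first one left behind (state 0, the earlier ts) before grabbing it for its own round. The lock and the related settings can be inspected directly in the config database; a small sketch against the mongos from this run:

    var config = db.getSiblingDB("config");

    // the distributed lock both routers compete for each balancing round
    printjson(config.locks.findOne({ _id: "balancer" }));

    // balancer on/off flag and chunk size live in config.settings
    printjson(config.settings.findOne({ _id: "balancer" }));   // may be null if never toggled
    printjson(config.settings.findOne({ _id: "chunksize" }));  // the "MaxChunkSize: 50" above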
m30998| Thu Jun 14 01:28:33 [mongosMain] connection accepted from 127.0.0.1:35411 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:28:33 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:28:33 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:28:33 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:33 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:28:33 [FileAllocator] done allocating datafile /data/db/auto20/config.1, size: 32MB, took 0.569 secs
m30000| Thu Jun 14 01:28:33 [conn4] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:356 w:1483 reslen:177 166ms
m30999| Thu Jun 14 01:28:33 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:28:33 [initandlisten] connection accepted from 127.0.0.1:58883 #2 (2 connections now open)
m30999| Thu Jun 14 01:28:33 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:33 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:33 [conn] connected connection!
m30999| Thu Jun 14 01:28:33 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:28:33 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:28:33 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:28:33 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:33 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:28:33 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { num: 1.0 } }
m30999| Thu Jun 14 01:28:33 [conn] enable sharding on: test.foo with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:28:33 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd97681a32f88a76fac6273
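The sequence above is the metadata side of cluster setup: each addShard writes a document into config.shards, enabling sharding marks the database as partitioned in config.databases, and shardcollection creates a single MinKey -> MaxKey chunk for test.foo under a fresh epoch. The same steps as explicit admin commands against the mongos, with the values taken from this log:

    var admin = db.getSiblingDB("admin");

    // register the two shards started above
    printjson(admin.runCommand({ addshard: "localhost:30000" }));   // -> { shardAdded: "shard0000", ok: 1 }
    printjson(admin.runCommand({ addshard: "localhost:30001" }));   // -> { shardAdded: "shard0001", ok: 1 }

    // enable sharding for the database and shard the collection on { num: 1 }
    printjson(admin.runCommand({ enablesharding: "test" }));
    printjson(admin.runCommand({ shardcollection: "test.foo", key: { num: 1 } }));

    // the initial single chunk shows up in config.chunks
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).forEach(printjson);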
m30001| Thu Jun 14 01:28:33 [FileAllocator] allocating new datafile /data/db/auto21/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:28:33 [FileAllocator] creating directory /data/db/auto21/_tmp
m30001| Thu Jun 14 01:28:33 [initandlisten] connection accepted from 127.0.0.1:58885 #3 (3 connections now open)
m30999| Thu Jun 14 01:28:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd97681a32f88a76fac6273 based on: (empty)
m30999| Thu Jun 14 01:28:33 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:33 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:33 [conn] connected connection!
m30999| Thu Jun 14 01:28:33 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97680a32f88a76fac6271
m30999| Thu Jun 14 01:28:33 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:28:33 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0841e8
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:28:33 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:33 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:28:33 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:33 [conn] connected connection!
m30999| Thu Jun 14 01:28:33 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97680a32f88a76fac6271
m30999| Thu Jun 14 01:28:33 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:28:33 BackgroundJob starting: WriteBackListener-localhost:30001
m30000| Thu Jun 14 01:28:33 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:28:33 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:33 [initandlisten] connection accepted from 127.0.0.1:38994 #10 (10 connections now open)
m30001| Thu Jun 14 01:28:33 [FileAllocator] done allocating datafile /data/db/auto21/test.ns, size: 16MB, took 0.362 secs
m30001| Thu Jun 14 01:28:33 [FileAllocator] allocating new datafile /data/db/auto21/test.0, filling with zeroes...
m30001| Thu Jun 14 01:28:33 [FileAllocator] done allocating datafile /data/db/auto21/test.0, size: 16MB, took 0.277 secs
m30001| Thu Jun 14 01:28:33 [FileAllocator] allocating new datafile /data/db/auto21/test.1, filling with zeroes...
m30001| Thu Jun 14 01:28:33 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:28:33 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:28:33 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:28:33 [conn2] build index test.foo { num: 1.0 }
m30001| Thu Jun 14 01:28:33 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:28:33 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:8 W:58 r:308 w:658206 658ms
m30001| Thu Jun 14 01:28:33 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97680a32f88a76fac6271'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:73 reslen:51 655ms
m30001| Thu Jun 14 01:28:33 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:28:33 [initandlisten] connection accepted from 127.0.0.1:38996 #11 (11 connections now open)
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 7447688 splitThreshold: 921
m30999| Thu Jun 14 01:28:33 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:33 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:33 [conn] connected connection!
m30999| Thu Jun 14 01:28:33 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 51255 splitThreshold: 921
m30999| Thu Jun 14 01:28:33 [conn] chunk not full enough to trigger auto-split { num: 1.0 }
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 51255 splitThreshold: 921
m30001| Thu Jun 14 01:28:33 [initandlisten] connection accepted from 127.0.0.1:58887 #4 (4 connections now open)
m30001| Thu Jun 14 01:28:33 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:33 [conn4] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30001| Thu Jun 14 01:28:33 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:33 [conn4] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30001| Thu Jun 14 01:28:33 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:33 [conn4] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30001| Thu Jun 14 01:28:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: MinKey }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 0.0 } ], shardId: "test.foo-num_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97681e111968e437c8d9c
m30001| Thu Jun 14 01:28:33 [conn4] splitChunk accepted at version 1|0||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:33-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651713978), what: "split", ns: "test.foo", details: { before: { min: { num: MinKey }, max: { num: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: MinKey }, max: { num: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 0.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30001| Thu Jun 14 01:28:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:33 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651713:710306523 (sleeping for 30000ms)
m30000| Thu Jun 14 01:28:33 [initandlisten] connection accepted from 127.0.0.1:38998 #12 (12 connections now open)
m30999| Thu Jun 14 01:28:33 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 3 version: 1|2||4fd97681a32f88a76fac6273 based on: 1|0||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } on: { num: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 3547427 splitThreshold: 471859
m30999| Thu Jun 14 01:28:33 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:33 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:33 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:33 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Thu Jun 14 01:28:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd97681a32f88a76fac6273 based on: 1|2||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 0.0 } max: { num: MaxKey } on: { num: 11.0 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:28:33 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:28:33 [conn] recently split chunk: { min: { num: 11.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 8364002 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:33 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:33 [conn4] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 0.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 11.0 } ], shardId: "test.foo-num_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97681e111968e437c8d9d
m30001| Thu Jun 14 01:28:33 [conn4] splitChunk accepted at version 1|2||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:33-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651713987), what: "split", ns: "test.foo", details: { before: { min: { num: 0.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 0.0 }, max: { num: 11.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 11.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30001| Thu Jun 14 01:28:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
j:0 : 98
m30999| Thu Jun 14 01:28:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:34 [conn] chunk not full enough to trigger auto-split no split entry
j:1 : 202
m30001| Thu Jun 14 01:28:34 [FileAllocator] done allocating datafile /data/db/auto21/test.1, size: 32MB, took 0.74 secs
m30999| Thu Jun 14 01:28:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:34 [conn] chunk not full enough to trigger auto-split no split entry
j:2 : 494
m30001| Thu Jun 14 01:28:34 [conn3] insert test.foo keyUpdates:0 locks(micros) W:83 r:289 w:654344 647ms
m30001| Thu Jun 14 01:28:34 [conn4] request split points lookup for chunk test.foo { : 11.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 11.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 11.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 294.0 } ], shardId: "test.foo-num_11.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97682e111968e437c8d9e
m30001| Thu Jun 14 01:28:34 [conn4] splitChunk accepted at version 1|4||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:34-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651714765), what: "split", ns: "test.foo", details: { before: { min: { num: 11.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 11.0 }, max: { num: 294.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 294.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30001| Thu Jun 14 01:28:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:34 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 294.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_294.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97682e111968e437c8d9f
m30001| Thu Jun 14 01:28:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:34-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651714769), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 294.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:34 [conn4] moveChunk request accepted at version 1|6||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:34 [conn4] moveChunk number of documents: 1
m30001| Thu Jun 14 01:28:34 [initandlisten] connection accepted from 127.0.0.1:58889 #5 (5 connections now open)
m30001| Thu Jun 14 01:28:34 [FileAllocator] allocating new datafile /data/db/auto21/test.2, filling with zeroes...
m30999| Thu Jun 14 01:28:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:34 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:34 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:34 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd97681a32f88a76fac6273 based on: 1|4||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { num: 11.0 } max: { num: MaxKey } on: { num: 294.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:34 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:28:34 [conn] moving chunk (auto): ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } to: shard0000:localhost:30000
m30999| Thu Jun 14 01:28:34 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30000| Thu Jun 14 01:28:34 [FileAllocator] allocating new datafile /data/db/auto20/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:28:35 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 294.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:36 [FileAllocator] done allocating datafile /data/db/auto21/test.2, size: 64MB, took 1.532 secs
m30000| Thu Jun 14 01:28:36 [FileAllocator] done allocating datafile /data/db/auto20/test.ns, size: 16MB, took 1.516 secs
m30000| Thu Jun 14 01:28:36 [FileAllocator] allocating new datafile /data/db/auto20/test.0, filling with zeroes...
m30000| Thu Jun 14 01:28:36 [FileAllocator] done allocating datafile /data/db/auto20/test.0, size: 16MB, took 0.291 secs
m30000| Thu Jun 14 01:28:36 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:28:36 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:36 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:28:36 [migrateThread] build index test.foo { num: 1.0 }
m30000| Thu Jun 14 01:28:36 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:36 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 294.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:28:36 [FileAllocator] allocating new datafile /data/db/auto20/test.1, filling with zeroes...
m30001| Thu Jun 14 01:28:36 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 294.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:36 [conn4] moveChunk setting version to: 2|0||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:36 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 294.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:28:36 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:36-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651716780), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 294.0 }, max: { num: MaxKey }, step1 of 5: 1834, step2 of 5: 0, step3 of 5: 1, step4 of 5: 0, step5 of 5: 174 } }
m30000| Thu Jun 14 01:28:36 [initandlisten] connection accepted from 127.0.0.1:39000 #13 (13 connections now open)
m30999| Thu Jun 14 01:28:36 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 2|1||4fd97681a32f88a76fac6273 based on: 1|6||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:36 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0841e8
m30999| Thu Jun 14 01:28:36 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:28:36 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa0841e8
m30999| Thu Jun 14 01:28:36 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:28:36 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } dataWritten: 7844383 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:36 [conn] chunk not full enough to trigger auto-split no split entry
j:3 : 2027
m30001| Thu Jun 14 01:28:36 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 294.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:28:36 [conn4] moveChunk updating self version to: 2|1||4fd97681a32f88a76fac6273 through { num: MinKey } -> { num: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:28:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:36-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651716785), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 294.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:36 [conn4] doing delete inline
m30001| Thu Jun 14 01:28:36 [conn4] moveChunk deleted: 1
m30001| Thu Jun 14 01:28:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:36-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651716786), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 294.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:28:36 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 294.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_294.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:1227 w:503 reslen:37 2018ms
m30000| Thu Jun 14 01:28:36 [conn10] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:28:36 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:36 [conn] chunk not full enough to trigger auto-split no split entry
m30000| Thu Jun 14 01:28:37 [FileAllocator] done allocating datafile /data/db/auto20/test.1, size: 32MB, took 0.686 secs
j:4 : 505
m30000| Thu Jun 14 01:28:37 [conn10] insert test.foo keyUpdates:0 locks(micros) W:99 w:487708 480ms
m30000| Thu Jun 14 01:28:37 [FileAllocator] allocating new datafile /data/db/auto20/test.2, filling with zeroes...
m30999| Thu Jun 14 01:28:37 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:37 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:37 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:37 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:37 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:37 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:37 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:37 [conn] chunk not full enough to trigger auto-split no split entry
j:5 : 82
m30000| Thu Jun 14 01:28:37 [conn5] request split points lookup for chunk test.foo { : 294.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:37 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 294.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:37 [initandlisten] connection accepted from 127.0.0.1:39001 #14 (14 connections now open)
m30000| Thu Jun 14 01:28:37 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 294.0 }, max: { num: MaxKey }, from: "shard0000", splitKeys: [ { num: 577.0 } ], shardId: "test.foo-num_294.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:37 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:37 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' acquired, ts : 4fd97685edeaba0fafe1ab9c
m30000| Thu Jun 14 01:28:37 [conn5] splitChunk accepted at version 2|0||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:37 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:37-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651717392), what: "split", ns: "test.foo", details: { before: { min: { num: 294.0 }, max: { num: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 294.0 }, max: { num: 577.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 577.0 }, max: { num: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30000| Thu Jun 14 01:28:37 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' unlocked.
m30000| Thu Jun 14 01:28:37 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339651717:21751690 (sleeping for 30000ms)
m30000| Thu Jun 14 01:28:37 [conn5] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 577.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_577.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:37 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:37 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' acquired, ts : 4fd97685edeaba0fafe1ab9d
m30000| Thu Jun 14 01:28:37 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:37-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651717396), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 577.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:28:37 [initandlisten] connection accepted from 127.0.0.1:39002 #15 (15 connections now open)
m30000| Thu Jun 14 01:28:37 [conn5] moveChunk request accepted at version 2|3||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:37 [conn5] moveChunk number of documents: 1
m30001| Thu Jun 14 01:28:37 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 577.0 } -> { num: MaxKey }
m30999| Thu Jun 14 01:28:37 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:37 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 2|3||4fd97681a32f88a76fac6273 based on: 2|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:37 [conn] autosplitted test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 294.0 } max: { num: MaxKey } on: { num: 577.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:37 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:28:37 [conn] moving chunk (auto): ns:test.foo at: shard0000:localhost:30000 lastmod: 2|3||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } to: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:37 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0000:localhost:30000 lastmod: 2|3||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:28:38 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { num: 577.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:28:38 [conn5] moveChunk setting version to: 3|0||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:38 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 577.0 } -> { num: MaxKey }
m30001| Thu Jun 14 01:28:38 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:38-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651718405), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 577.0 }, max: { num: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1005 } }
m30000| Thu Jun 14 01:28:38 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { num: 577.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:28:38 [conn5] moveChunk updating self version to: 3|1||4fd97681a32f88a76fac6273 through { num: 294.0 } -> { num: 577.0 } for collection 'test.foo'
m30000| Thu Jun 14 01:28:38 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:38-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651718409), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 577.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:28:38 [conn5] doing delete inline
m30000| Thu Jun 14 01:28:38 [conn5] moveChunk deleted: 1
m30000| Thu Jun 14 01:28:38 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' unlocked.
m30000| Thu Jun 14 01:28:38 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:38-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651718710), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 577.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 300 } }
m30000| Thu Jun 14 01:28:38 [conn5] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 577.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_577.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2669 w:301553 reslen:37 1314ms
m30000| Thu Jun 14 01:28:38 [FileAllocator] done allocating datafile /data/db/auto20/test.2, size: 64MB, took 1.371 secs
m30999| Thu Jun 14 01:28:38 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 3|1||4fd97681a32f88a76fac6273 based on: 2|3||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } dataWritten: 3973102 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 577.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split no split entry
j:6 : 1356
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 577.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 577.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 577.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split { num: 704.0 }
j:7 : 90
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 577.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split { num: 704.0 }
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 577.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split { num: 704.0 }
j:8 : 82
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 577.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 577.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 577.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 860.0 } ], shardId: "test.foo-num_577.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97686e111968e437c8da0
m30001| Thu Jun 14 01:28:38 [conn4] splitChunk accepted at version 3|0||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:38-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651718916), what: "split", ns: "test.foo", details: { before: { min: { num: 577.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 577.0 }, max: { num: 860.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 860.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30001| Thu Jun 14 01:28:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 860.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 3|3||4fd97681a32f88a76fac6273 based on: 3|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { num: 577.0 } max: { num: MaxKey } on: { num: 860.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:38 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:28:38 [conn] recently split chunk: { min: { num: 860.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|3, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:38 [conn] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { num: 860.0 } max: { num: MaxKey } dataWritten: 2392414 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { num: 860.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:38 [conn4] request split points lookup for chunk test.foo { : 860.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:38 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:38 [FileAllocator] allocating new datafile /data/db/auto21/test.3, filling with zeroes...
j:9 : 84
m30001| Thu Jun 14 01:28:39 [conn4] request split points lookup for chunk test.foo { : 860.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:39 [conn4] request split points lookup for chunk test.foo { : 860.0 } -->> { : MaxKey }
j:10 : 82
m30001| Thu Jun 14 01:28:39 [conn4] request split points lookup for chunk test.foo { : 860.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:39 [conn4] request split points lookup for chunk test.foo { : 860.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:39 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 860.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { num: 860.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:39 [conn] chunk not full enough to trigger auto-split { num: 987.0 }
m30999| Thu Jun 14 01:28:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { num: 860.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:39 [conn] chunk not full enough to trigger auto-split { num: 987.0 }
m30999| Thu Jun 14 01:28:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { num: 860.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:39 [conn] chunk not full enough to trigger auto-split { num: 987.0 }
m30999| Thu Jun 14 01:28:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { num: 860.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:39 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 860.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 1137.0 } ], shardId: "test.foo-num_860.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:39 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:39 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97687e111968e437c8da1
m30001| Thu Jun 14 01:28:39 [conn4] splitChunk accepted at version 3|3||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:39-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651719126), what: "split", ns: "test.foo", details: { before: { min: { num: 860.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 860.0 }, max: { num: 1137.0 }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 1137.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30001| Thu Jun 14 01:28:39 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:39 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 1137.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1137.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:39 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:39 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97687e111968e437c8da2
m30001| Thu Jun 14 01:28:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:39-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651719132), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 1137.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:39 [conn4] moveChunk request accepted at version 3|5||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:39 [conn4] moveChunk number of documents: 1
m30000| Thu Jun 14 01:28:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1137.0 } -> { num: MaxKey }
m30999| Thu Jun 14 01:28:39 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 3|5||4fd97681a32f88a76fac6273 based on: 3|3||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:39 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { num: 860.0 } max: { num: MaxKey } on: { num: 1137.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:39 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 96 writeLock: 0
m30999| Thu Jun 14 01:28:39 [conn] moving chunk (auto): ns:test.foo at: shard0001:localhost:30001 lastmod: 3|5||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } to: shard0000:localhost:30000
m30999| Thu Jun 14 01:28:39 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 3|5||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
j:11 : 120
m30001| Thu Jun 14 01:28:40 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1137.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:40 [conn4] moveChunk setting version to: 4|0||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1137.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:28:40 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:40-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651720137), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 1137.0 }, max: { num: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1002 } }
m30001| Thu Jun 14 01:28:40 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 1137.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:28:40 [conn4] moveChunk updating self version to: 4|1||4fd97681a32f88a76fac6273 through { num: MinKey } -> { num: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:28:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:40-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651720141), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 1137.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:40 [conn4] doing delete inline
m30001| Thu Jun 14 01:28:40 [conn4] moveChunk deleted: 1
m30001| Thu Jun 14 01:28:40 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:40-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651720142), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 1137.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:28:40 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 1137.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1137.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:5908 w:905 reslen:37 1011ms
m30999| Thu Jun 14 01:28:40 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 4|1||4fd97681a32f88a76fac6273 based on: 3|5||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:40 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0841e8
m30999| Thu Jun 14 01:28:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } dataWritten: 8437080 splitThreshold: 11796480
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1137.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
j:12 : 990
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split { num: 1264.0 }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split { num: 1264.0 }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split { num: 1264.0 }
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1137.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1137.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1137.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1137.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1137.0 } -->> { : MaxKey }
j:13 : 74
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1137.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 1137.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1137.0 }, max: { num: MaxKey }, from: "shard0000", splitKeys: [ { num: 1420.0 } ], shardId: "test.foo-num_1137.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:40 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30000| Thu Jun 14 01:28:40 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' acquired, ts : 4fd97688edeaba0fafe1ab9e
m30000| Thu Jun 14 01:28:40 [conn5] splitChunk accepted at version 4|0||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:40 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:40-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651720282), what: "split", ns: "test.foo", details: { before: { min: { num: 1137.0 }, max: { num: MaxKey }, lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 1137.0 }, max: { num: 1420.0 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 1420.0 }, max: { num: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30000| Thu Jun 14 01:28:40 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' unlocked.
m30999| Thu Jun 14 01:28:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 4|3||4fd97681a32f88a76fac6273 based on: 4|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { num: 1137.0 } max: { num: MaxKey } on: { num: 1420.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:40 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 96 writeLock: 0
m30999| Thu Jun 14 01:28:40 [conn] recently split chunk: { min: { num: 1420.0 }, max: { num: MaxKey } } already in the best shard: shard0000:localhost:30000
m30999| Thu Jun 14 01:28:40 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|3, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0841e8
m30999| Thu Jun 14 01:28:40 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1420.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { num: 1420.0 } max: { num: MaxKey } dataWritten: 2364250 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split no split entry
j:14 : 73
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1420.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { num: 1420.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split no split entry
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1420.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { num: 1420.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split { num: 1547.0 }
m30000| Thu Jun 14 01:28:40 [FileAllocator] allocating new datafile /data/db/auto20/test.3, filling with zeroes...
j:15 : 77
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1420.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1420.0 } -->> { : MaxKey }
j:16 : 97
m30000| Thu Jun 14 01:28:40 [conn5] request split points lookup for chunk test.foo { : 1420.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:28:40 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 1420.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { num: 1420.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split { num: 1547.0 }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { num: 1420.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:40 [conn] chunk not full enough to trigger auto-split { num: 1547.0 }
m30999| Thu Jun 14 01:28:40 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { num: 1420.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30000| Thu Jun 14 01:28:40 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1420.0 }, max: { num: MaxKey }, from: "shard0000", splitKeys: [ { num: 1692.0 } ], shardId: "test.foo-num_1420.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:40 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:40 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' acquired, ts : 4fd97688edeaba0fafe1ab9f
m30000| Thu Jun 14 01:28:40 [conn5] splitChunk accepted at version 4|3||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:40 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:40-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651720529), what: "split", ns: "test.foo", details: { before: { min: { num: 1420.0 }, max: { num: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 1420.0 }, max: { num: 1692.0 }, lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 1692.0 }, max: { num: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30000| Thu Jun 14 01:28:40 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' unlocked.
m30999| Thu Jun 14 01:28:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 4|5||4fd97681a32f88a76fac6273 based on: 4|3||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { num: 1420.0 } max: { num: MaxKey } on: { num: 1692.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:40 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 128 writeLock: 0
m30999| Thu Jun 14 01:28:40 [conn] moving chunk (auto): ns:test.foo at: shard0000:localhost:30000 lastmod: 4|5||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } to: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:40 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0000:localhost:30000 lastmod: 4|5||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:28:40 [conn5] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 1692.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1692.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:28:40 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:40 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' acquired, ts : 4fd97688edeaba0fafe1aba0
m30000| Thu Jun 14 01:28:40 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:40-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651720534), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 1692.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:28:40 [conn5] moveChunk request accepted at version 4|5||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:40 [conn5] moveChunk number of documents: 1
m30001| Thu Jun 14 01:28:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1692.0 } -> { num: MaxKey }
j:17 : 319
m30000| Thu Jun 14 01:28:41 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { num: 1692.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:28:41 [conn5] moveChunk setting version to: 5|0||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1692.0 } -> { num: MaxKey }
m30001| Thu Jun 14 01:28:41 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:41-12", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651721545), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 1692.0 }, max: { num: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30000| Thu Jun 14 01:28:41 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { num: 1692.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:28:41 [conn5] moveChunk updating self version to: 5|1||4fd97681a32f88a76fac6273 through { num: 294.0 } -> { num: 577.0 } for collection 'test.foo'
m30000| Thu Jun 14 01:28:41 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:41-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651721549), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 1692.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:28:41 [conn5] doing delete inline
m30000| Thu Jun 14 01:28:42 [conn5] moveChunk deleted: 1
m30000| Thu Jun 14 01:28:42 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339651717:21751690' unlocked.
m30000| Thu Jun 14 01:28:42 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:42-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:38985", time: new Date(1339651722057), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 1692.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 506 } }
m30000| Thu Jun 14 01:28:42 [conn5] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 1692.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1692.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:10251 w:808379 reslen:37 1524ms
m30999| Thu Jun 14 01:28:42 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 5|1||4fd97681a32f88a76fac6273 based on: 4|5||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:42 [conn] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } dataWritten: 5855115 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split no split entry
j:18 : 1450
m30001| Thu Jun 14 01:28:42 [conn3] insert test.foo keyUpdates:0 locks(micros) W:95 r:776 w:959546 234ms
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split { num: 1819.0 }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split { num: 1819.0 }
j:19 : 364
m30001| Thu Jun 14 01:28:42 [conn3] insert test.foo keyUpdates:0 locks(micros) W:95 r:776 w:1223315 246ms
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:42 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1692.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:42 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1692.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 1975.0 } ], shardId: "test.foo-num_1692.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:42 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split { num: 1819.0 }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480
m30001| Thu Jun 14 01:28:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd9768ae111968e437c8da3
m30001| Thu Jun 14 01:28:42 [conn4] splitChunk accepted at version 5|0||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:42-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651722661), what: "split", ns: "test.foo", details: { before: { min: { num: 1692.0 }, max: { num: MaxKey }, lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 1692.0 }, max: { num: 1975.0 }, lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') }, right: { min: { num: 1975.0 }, max: { num: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273') } } }
m30001| Thu Jun 14 01:28:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
j:20 : 63
m30001| Thu Jun 14 01:28:42 [FileAllocator] done allocating datafile /data/db/auto21/test.3, size: 128MB, took 3.68 secs
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 5|3||4fd97681a32f88a76fac6273 based on: 5|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { num: 1692.0 } max: { num: MaxKey } on: { num: 1975.0 } (splitThreshold 11796480) (migrate suggested)
m30999| Thu Jun 14 01:28:42 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 128 writeLock: 0
m30999| Thu Jun 14 01:28:42 [conn] recently split chunk: { min: { num: 1975.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:28:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|3, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:42 [conn] setShardVersion success: { oldVersion: Timestamp 5000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4728632 splitThreshold: 23592960
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:28:42 [conn4] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split no split entry
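The split traffic above is the shard answering mongos's split-point lookups and, when points come back, committing a splitChunk under the collection's distributed lock. As a hedged aside (not part of this test's script), the "request split points lookup" lines correspond to the splitVector command, which can also be run by hand against the donor shard; the port, key range, and chunk size below simply mirror values already shown in the log:

// hypothetical manual reproduction of a split-point lookup
var shard = connect("localhost:30001/admin");
printjson(shard.runCommand({
    splitVector: "test.foo",
    keyPattern: { num: 1 },
    min: { num: 1975 }, max: { num: MaxKey },
    maxChunkSizeBytes: 52428800   // 50MB, matching the maxChunkSizeBytes in the moveChunk requests above
}));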
m30999| Thu Jun 14 01:28:42 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:42 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:42 [initandlisten] connection accepted from 127.0.0.1:39003 #16 (16 connections now open)
m30001| Thu Jun 14 01:28:42 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: MinKey }, max: { num: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:42 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd9768ae111968e437c8da4
m30001| Thu Jun 14 01:28:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:42-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651722782), what: "moveChunk.start", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:42 [conn4] moveChunk request accepted at version 5|3||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:42 [conn4] moveChunk number of documents: 0
m30999| Thu Jun 14 01:28:42 [Balancer] connected connection!
m30999| Thu Jun 14 01:28:42 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:28:42 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651712:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651712:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:28:42 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd9768aa32f88a76fac6274" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976805714d7a403a48dcf" } }
m30999| Thu Jun 14 01:28:42 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' acquired, ts : 4fd9768aa32f88a76fac6274
m30999| Thu Jun 14 01:28:42 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:28:42 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:28:42 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:28:42 [Balancer] shard0001 maxSize: 0 currSize: 128 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:28:42 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:28:42 [Balancer] shard0000
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_294.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 294.0 }, max: { num: 577.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_1137.0", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1137.0 }, max: { num: 1420.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_1420.0", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1420.0 }, max: { num: 1692.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:42 [Balancer] shard0001
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_MinKey", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: MinKey }, max: { num: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 0.0 }, max: { num: 11.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_11.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 11.0 }, max: { num: 294.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_577.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 577.0 }, max: { num: 860.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_860.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 860.0 }, max: { num: 1137.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_1692.0", lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1692.0 }, max: { num: 1975.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] { _id: "test.foo-num_1975.0", lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1975.0 }, max: { num: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] ----
m30999| Thu Jun 14 01:28:42 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:28:42 [Balancer] donor : 7 chunks on shard0001
m30999| Thu Jun 14 01:28:42 [Balancer] receiver : 3 chunks on shard0000
m30999| Thu Jun 14 01:28:42 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-num_MinKey", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: MinKey }, max: { num: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:42 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 4|1||000000000000000000000000 min: { num: MinKey } max: { num: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
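The ShardInfoMap/ShardToChunksMap dump above is the balancer's input for this round: shard0001 holds 7 chunks of test.foo, shard0000 holds 3, so the lowest chunk (MinKey -> 0) is moved toward shard0000. A rough sketch of that chunk-count heuristic, written as plain shell JavaScript, is below; the imbalance threshold of 2 is an assumption for illustration, not the balancer's actual constant:

// toy version of "donor = most chunks, receiver = fewest chunks"
function chooseMigration(counts) {        // e.g. { shard0000: 3, shard0001: 7 }
    var donor = null, receiver = null;
    for (var s in counts) {
        if (donor === null || counts[s] > counts[donor]) donor = s;
        if (receiver === null || counts[s] < counts[receiver]) receiver = s;
    }
    if (donor === receiver || counts[donor] - counts[receiver] < 2) return null;
    return { from: donor, to: receiver }; // the balancer then issues a moveChunk
}
printjson(chooseMigration({ shard0000: 3, shard0001: 7 })); // { from: "shard0001", to: "shard0000" }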
m30000| Thu Jun 14 01:28:42 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: MinKey } -> { num: 0.0 }
j:21 : 90
m30001| Thu Jun 14 01:28:42 [initandlisten] connection accepted from 127.0.0.1:58894 #6 (6 connections now open)
m30001| Thu Jun 14 01:28:42 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30999| Thu Jun 14 01:28:42 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:42 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:42 [conn] connected connection!
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30001| Thu Jun 14 01:28:42 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split { num: 2230.0 }
j:22 : 77
m30001| Thu Jun 14 01:28:42 [initandlisten] connection accepted from 127.0.0.1:58895 #7 (7 connections now open)
m30000| Thu Jun 14 01:28:42 [initandlisten] connection accepted from 127.0.0.1:39006 #17 (17 connections now open)
m30001| Thu Jun 14 01:28:42 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30999| Thu Jun 14 01:28:42 [conn] chunk not full enough to trigger auto-split { num: 2230.0 }
m30998| Thu Jun 14 01:28:42 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:28:42 [Balancer] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:28:42 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:28:42 [Balancer] connected connection!
m30998| Thu Jun 14 01:28:42 [Balancer] checking last ping for lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' against process and ping Wed Dec 31 19:00:00 1969
m30998| Thu Jun 14 01:28:42 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:28:42 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:28:42 [Balancer] connected connection!
m30998| Thu Jun 14 01:28:42 [Balancer] could not force lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' because elapsed time 0 <= takeover time 900000
m30998| Thu Jun 14 01:28:42 [Balancer] skipping balancing round because another balancer is active
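The m30998 lines above show the second mongos losing the race for the balancer's distributed lock: it cannot force the lock because the holder's ping is fresh (elapsed time 0 is well under the 900000 ms takeover window), so it skips its round. As a hedged aside, the lock document being checked lives in the config server's config.locks collection and can be inspected directly:

// hypothetical manual check of the balancer lock on the config server
var config = connect("localhost:30000/config");
printjson(config.locks.findOne({ _id: "balancer" }));  // state 1 while a balancing round holds it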
j:23 : 89
m30999| Thu Jun 14 01:28:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30001| Thu Jun 14 01:28:43 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:43 [conn] chunk not full enough to trigger auto-split { num: 2230.0 }
j:24 : 82
m30999| Thu Jun 14 01:28:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30001| Thu Jun 14 01:28:43 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2576.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:43 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
j:25 : 74
m30000| Thu Jun 14 01:28:43 [initandlisten] connection accepted from 127.0.0.1:39007 #18 (18 connections now open)
m30001| Thu Jun 14 01:28:43 [conn6] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2576.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:3734 reslen:329 123ms
m30999| Thu Jun 14 01:28:43 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2576.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339651713:710306523", state: 2, ts: ObjectId('4fd9768ae111968e437c8da4'), when: new Date(1339651722781), who: "domU-12-31-39-01-70-B4:30001:1339651713:710306523:conn4:699453307", why: "migrate-{ num: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30001| Thu Jun 14 01:28:43 [FileAllocator] allocating new datafile /data/db/auto21/test.4, filling with zeroes...
m30001| Thu Jun 14 01:28:43 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
j:26 : 298
m30001| Thu Jun 14 01:28:43 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2669.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:43 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:43 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2669.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339651713:710306523", state: 2, ts: ObjectId('4fd9768ae111968e437c8da4'), when: new Date(1339651722781), who: "domU-12-31-39-01-70-B4:30001:1339651713:710306523:conn4:699453307", why: "migrate-{ num: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30001| Thu Jun 14 01:28:43 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30001| Thu Jun 14 01:28:43 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2762.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:43 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:43 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2762.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339651713:710306523", state: 2, ts: ObjectId('4fd9768ae111968e437c8da4'), when: new Date(1339651722781), who: "domU-12-31-39-01-70-B4:30001:1339651713:710306523:conn4:699453307", why: "migrate-{ num: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
j:27 : 126
m30001| Thu Jun 14 01:28:43 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:28:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30001| Thu Jun 14 01:28:43 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2855.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:43 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:43 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2855.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339651713:710306523", state: 2, ts: ObjectId('4fd9768ae111968e437c8da4'), when: new Date(1339651722781), who: "domU-12-31-39-01-70-B4:30001:1339651713:710306523:conn4:699453307", why: "migrate-{ num: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
j:28 : 81
m30001| Thu Jun 14 01:28:43 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2948.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:43 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:43 [conn6] request split points lookup for chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] max number of requested split points reached (2) before the end of chunk test.foo { : 1975.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:28:43 [conn6] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2949.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:43 [conn6] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:28:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960
m30999| Thu Jun 14 01:28:43 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2948.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339651713:710306523", state: 2, ts: ObjectId('4fd9768ae111968e437c8da4'), when: new Date(1339651722781), who: "domU-12-31-39-01-70-B4:30001:1339651713:710306523:conn4:699453307", why: "migrate-{ num: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:28:43 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 5|3||4fd97681a32f88a76fac6273 based on: 5|3||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:43 [conn] warning: chunk manager reload forced for collection 'test.foo', config version is 5|3||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:43 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|3, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:43 [conn] setShardVersion success: { oldVersion: Timestamp 5000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { num: 1975.0 } max: { num: MaxKey } dataWritten: 6058136 splitThreshold: 23592960
m30999| Thu Jun 14 01:28:43 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1975.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2949.0 } ], shardId: "test.foo-num_1975.0", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339651713:710306523", state: 2, ts: ObjectId('4fd9768ae111968e437c8da4'), when: new Date(1339651722781), who: "domU-12-31-39-01-70-B4:30001:1339651713:710306523:conn4:699453307", why: "migrate-{ num: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
j:29 : 77
done inserting data
datasize: {
"estimate" : false,
"size" : 110818240,
"numObjects" : 2162,
"millis" : 1,
"ok" : 1
}
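The "datasize:" document above is the result of the datasize command the test runs once inserting finishes (roughly 110 MB across 2162 documents). A hedged way to reproduce a similar figure by hand, assuming a direct connection to the shard holding most of test.foo:

// hypothetical manual run of the same command
var shard = connect("localhost:30001/admin");
printjson(shard.runCommand({ datasize: "test.foo" }));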
ShardingTest test.foo-num_MinKey 4000|1 { "num" : { $minKey : 1 } } -> { "num" : 0 } shard0001 test.foo
test.foo-num_0.0 1000|3 { "num" : 0 } -> { "num" : 11 } shard0001 test.foo
test.foo-num_11.0 1000|5 { "num" : 11 } -> { "num" : 294 } shard0001 test.foo
test.foo-num_294.0 5000|1 { "num" : 294 } -> { "num" : 577 } shard0000 test.foo
test.foo-num_577.0 3000|2 { "num" : 577 } -> { "num" : 860 } shard0001 test.foo
test.foo-num_860.0 3000|4 { "num" : 860 } -> { "num" : 1137 } shard0001 test.foo
test.foo-num_1137.0 4000|2 { "num" : 1137 } -> { "num" : 1420 } shard0000 test.foo
test.foo-num_1420.0 4000|4 { "num" : 1420 } -> { "num" : 1692 } shard0000 test.foo
test.foo-num_1692.0 5000|2 { "num" : 1692 } -> { "num" : 1975 } shard0001 test.foo
test.foo-num_1975.0 5000|3 { "num" : 1975 } -> { "num" : { $maxKey : 1 } } shard0001 test.foo
checkpoint B
m30001| Thu Jun 14 01:28:43 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:43 [conn4] moveChunk setting version to: 6|0||4fd97681a32f88a76fac6273
m30000| Thu Jun 14 01:28:43 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: MinKey } -> { num: 0.0 }
m30000| Thu Jun 14 01:28:43 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:43-11", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651723789), what: "moveChunk.to", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1005 } }
m30001| Thu Jun 14 01:28:43 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:28:43 [conn4] moveChunk updating self version to: 6|1||4fd97681a32f88a76fac6273 through { num: 0.0 } -> { num: 11.0 } for collection 'test.foo'
m30000| Thu Jun 14 01:28:43 [conn12] command config.$cmd command: { applyOps: [ { op: "u", b: false, ns: "config.chunks", o: { _id: "test.foo-num_MinKey", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: MinKey }, max: { num: 0.0 }, shard: "shard0000" }, o2: { _id: "test.foo-num_MinKey" } }, { op: "u", b: false, ns: "config.chunks", o: { _id: "test.foo-num_0.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 0.0 }, max: { num: 11.0 }, shard: "shard0001" }, o2: { _id: "test.foo-num_0.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "test.foo" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 5000|3 } } ] } ntoreturn:1 keyUpdates:0 locks(micros) W:167874 r:3574 w:3789 reslen:72 164ms
m30001| Thu Jun 14 01:28:43 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:43-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651723958), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:43 [conn4] doing delete inline
m30001| Thu Jun 14 01:28:43 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:28:43 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:43 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:43-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651723959), what: "moveChunk.from", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 173, step6 of 6: 0 } }
m30001| Thu Jun 14 01:28:43 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: MinKey }, max: { num: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:8965 w:967 reslen:37 1177ms
m30999| Thu Jun 14 01:28:43 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:43 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 6|1||4fd97681a32f88a76fac6273 based on: 5|3||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:43 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:28:43 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' unlocked.
m30999| Thu Jun 14 01:28:44 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:44 [conn] setShardVersion success: { oldVersion: Timestamp 5000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:44 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0841e8
m30999| Thu Jun 14 01:28:44 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30000| Thu Jun 14 01:28:46 [FileAllocator] done allocating datafile /data/db/auto20/test.3, size: 128MB, took 6.071 secs
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:33 GMT-0400 (EDT) split test.foo { "num" : { $minKey : 1 } } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : { $minKey : 1 } } -> { "num" : 0 }),({ "num" : 0 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:33 GMT-0400 (EDT) split test.foo { "num" : 0 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 0 } -> { "num" : 11 }),({ "num" : 11 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:34 GMT-0400 (EDT) split test.foo { "num" : 11 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 11 } -> { "num" : 294 }),({ "num" : 294 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:34 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 294 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:36 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 294 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 1834, "step2 of 5" : 0, "step3 of 5" : 1, "step4 of 5" : 0, "step5 of 5" : 174 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:36 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 294 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:36 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 294 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 2006, "step5 of 6" : 8, "step6 of 6" : 0 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:37 GMT-0400 (EDT) split test.foo { "num" : 294 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 294 } -> { "num" : 577 }),({ "num" : 577 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:37 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 577 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0000", "to" : "shard0001" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:38 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 577 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 1005 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:38 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 577 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0000", "to" : "shard0001" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:38 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 577 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 1003, "step5 of 6" : 8, "step6 of 6" : 300 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:38 GMT-0400 (EDT) split test.foo { "num" : 577 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 577 } -> { "num" : 860 }),({ "num" : 860 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:39 GMT-0400 (EDT) split test.foo { "num" : 860 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 860 } -> { "num" : 1137 }),({ "num" : 1137 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:39 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 1137 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:40 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 1137 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 1002 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:40 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 1137 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:40 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 1137 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 1000, "step5 of 6" : 8, "step6 of 6" : 0 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:40 GMT-0400 (EDT) split test.foo { "num" : 1137 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 1137 } -> { "num" : 1420 }),({ "num" : 1420 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:40 GMT-0400 (EDT) split test.foo { "num" : 1420 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 1420 } -> { "num" : 1692 }),({ "num" : 1692 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:40 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : 1692 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0000", "to" : "shard0001" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:41 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : 1692 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 1008 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:41 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : 1692 }, "max" : { "num" : { $maxKey : 1 } }, "from" : "shard0000", "to" : "shard0001" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:42 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : 1692 }, "max" : { "num" : { $maxKey : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 1002, "step5 of 6" : 12, "step6 of 6" : 506 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:42 GMT-0400 (EDT) split test.foo { "num" : 1692 } -> { "num" : { $maxKey : 1 } } -->> ({ "num" : 1692 } -> { "num" : 1975 }),({ "num" : 1975 } -> { "num" : { $maxKey : 1 } })
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:42 GMT-0400 (EDT) moveChunk.start test.foo { "min" : { "num" : { $minKey : 1 } }, "max" : { "num" : 0 }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:43 GMT-0400 (EDT) moveChunk.to test.foo { "min" : { "num" : { $minKey : 1 } }, "max" : { "num" : 0 }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 1005 }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:43 GMT-0400 (EDT) moveChunk.commit test.foo { "min" : { "num" : { $minKey : 1 } }, "max" : { "num" : 0 }, "from" : "shard0001", "to" : "shard0000" }
ShardingTest domU-12-31-39-01-70-B4 Thu Jun 14 2012 01:28:43 GMT-0400 (EDT) moveChunk.from test.foo { "min" : { "num" : { $minKey : 1 } }, "max" : { "num" : 0 }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 1002, "step5 of 6" : 173, "step6 of 6" : 0 }
missing: [ ]
checkpoint B.a
ShardingTest test.foo-num_MinKey 6000|0 { "num" : { $minKey : 1 } } -> { "num" : 0 } shard0000 test.foo
test.foo-num_0.0 6000|1 { "num" : 0 } -> { "num" : 11 } shard0001 test.foo
test.foo-num_11.0 1000|5 { "num" : 11 } -> { "num" : 294 } shard0001 test.foo
test.foo-num_294.0 5000|1 { "num" : 294 } -> { "num" : 577 } shard0000 test.foo
test.foo-num_577.0 3000|2 { "num" : 577 } -> { "num" : 860 } shard0001 test.foo
test.foo-num_860.0 3000|4 { "num" : 860 } -> { "num" : 1137 } shard0001 test.foo
test.foo-num_1137.0 4000|2 { "num" : 1137 } -> { "num" : 1420 } shard0000 test.foo
test.foo-num_1420.0 4000|4 { "num" : 1420 } -> { "num" : 1692 } shard0000 test.foo
test.foo-num_1692.0 5000|2 { "num" : 1692 } -> { "num" : 1975 } shard0001 test.foo
test.foo-num_1975.0 5000|3 { "num" : 1975 } -> { "num" : { $maxKey : 1 } } shard0001 test.foo
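The chunk tables printed at checkpoints B and B.a are the test harness's own rendering of config.chunks. Roughly the same chunk-to-shard layout can be read back through the standard shell helper against the mongos on port 30999; a hedged example:

// hypothetical manual inspection of the same metadata
var mongos = connect("localhost:30999/test");
mongos.printShardingStatus();   // or sh.status() in an interactive shell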
m30999| Thu Jun 14 01:28:48 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:28:48 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651712:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651712:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:28:48 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97690a32f88a76fac6275" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd9768aa32f88a76fac6274" } }
m30999| Thu Jun 14 01:28:48 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' acquired, ts : 4fd97690a32f88a76fac6275
m30999| Thu Jun 14 01:28:48 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:28:48 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:28:48 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:28:48 [Balancer] shard0001 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:28:48 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:28:48 [Balancer] shard0000
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_MinKey", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: MinKey }, max: { num: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_294.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 294.0 }, max: { num: 577.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_1137.0", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1137.0 }, max: { num: 1420.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_1420.0", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1420.0 }, max: { num: 1692.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:28:48 [Balancer] shard0001
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_0.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 0.0 }, max: { num: 11.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_11.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 11.0 }, max: { num: 294.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_577.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 577.0 }, max: { num: 860.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_860.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 860.0 }, max: { num: 1137.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_1692.0", lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1692.0 }, max: { num: 1975.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:48 [Balancer] { _id: "test.foo-num_1975.0", lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 1975.0 }, max: { num: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:48 [Balancer] ----
m30999| Thu Jun 14 01:28:48 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:28:48 [Balancer] donor : 6 chunks on shard0001
m30999| Thu Jun 14 01:28:48 [Balancer] receiver : 4 chunks on shard0000
m30999| Thu Jun 14 01:28:48 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-num_0.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", min: { num: 0.0 }, max: { num: 11.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:28:48 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 6|1||000000000000000000000000 min: { num: 0.0 } max: { num: 11.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:28:48 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 0.0 }, max: { num: 11.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:28:48 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:28:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' acquired, ts : 4fd97690e111968e437c8da5
m30001| Thu Jun 14 01:28:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:48-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651728988), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 11.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:48 [conn4] moveChunk request accepted at version 6|1||4fd97681a32f88a76fac6273
m30001| Thu Jun 14 01:28:48 [conn4] moveChunk number of documents: 11
m30000| Thu Jun 14 01:28:49 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 0.0 } -> { num: 11.0 }
checkpoint C
m30001| Thu Jun 14 01:28:49 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 11.0 }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 11, clonedBytes: 563805, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:28:49 [conn4] moveChunk setting version to: 7|0||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:49 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 6|1||4fd97681a32f88a76fac6273 based on: 6|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:49 [conn] warning: chunk manager reload forced for collection 'test.foo', config version is 6|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:49 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30000| Thu Jun 14 01:28:50 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 0.0 } -> { num: 11.0 }
m30000| Thu Jun 14 01:28:50 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:50-12", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651730001), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 11.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 24, step4 of 5: 0, step5 of 5: 987 } }
m30001| Thu Jun 14 01:28:50 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 11.0 }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 11, clonedBytes: 563805, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:28:50 [conn4] moveChunk updating self version to: 7|1||4fd97681a32f88a76fac6273 through { num: 11.0 } -> { num: 294.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:28:50 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:50-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651730006), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 11.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:28:50 [conn4] doing delete inline
m30001| Thu Jun 14 01:28:50 [conn3] command admin.$cmd command: { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) W:37 reslen:307 314ms
m30999| Thu Jun 14 01:28:50 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ns: "test.foo", version: Timestamp 6000|1, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), globalVersion: Timestamp 7000|0, globalVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), reloadConfig: true, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:28:50 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 7|1||4fd97681a32f88a76fac6273 based on: 6|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:50 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 7|1||4fd97681a32f88a76fac6273 based on: 7|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:50 [conn] warning: chunk manager reload forced for collection 'test.foo', config version is 7|1||4fd97681a32f88a76fac6273
m30999| Thu Jun 14 01:28:50 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|0, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0841e8
m30999| Thu Jun 14 01:28:50 [conn] setShardVersion success: { oldVersion: Timestamp 6000|0, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30999| Thu Jun 14 01:28:50 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('4fd97681a32f88a76fac6273'), serverID: ObjectId('4fd97680a32f88a76fac6271'), shard: "shard0001", shardHost: "localhost:30001" } 0xa084628
m30999| Thu Jun 14 01:28:50 [conn] setShardVersion success: { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('4fd97681a32f88a76fac6273'), ok: 1.0 }
m30001| Thu Jun 14 01:28:50 [conn4] moveChunk deleted: 11
m30001| Thu Jun 14 01:28:50 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651713:710306523' unlocked.
m30001| Thu Jun 14 01:28:50 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:28:50-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58887", time: new Date(1339651730599), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 11.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 16, step6 of 6: 592 } }
m30001| Thu Jun 14 01:28:50 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 0.0 }, max: { num: 11.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:105966 w:577471 reslen:37 1612ms
m30999| Thu Jun 14 01:28:50 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:28:50 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:28:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651712:1804289383' unlocked.
checkpoint D
m30999| Thu Jun 14 01:28:50 [conn] couldn't find database [test2] in config db
m30999| Thu Jun 14 01:28:50 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 160 writeLock: 0
m30999| Thu Jun 14 01:28:50 [conn] put [test2] on: shard0000:localhost:30000
m30000| Thu Jun 14 01:28:50 [FileAllocator] allocating new datafile /data/db/auto20/test2.ns, filling with zeroes...
m30001| Thu Jun 14 01:28:51 [FileAllocator] done allocating datafile /data/db/auto21/test.4, size: 256MB, took 8.127 secs
m30000| Thu Jun 14 01:28:51 [FileAllocator] done allocating datafile /data/db/auto20/test2.ns, size: 16MB, took 0.921 secs
m30000| Thu Jun 14 01:28:51 [FileAllocator] allocating new datafile /data/db/auto20/test2.0, filling with zeroes...
m30000| Thu Jun 14 01:28:52 [FileAllocator] done allocating datafile /data/db/auto20/test2.0, size: 16MB, took 0.278 secs
m30000| Thu Jun 14 01:28:52 [conn10] build index test2.foobar { _id: 1 }
m30000| Thu Jun 14 01:28:52 [FileAllocator] allocating new datafile /data/db/auto20/test2.1, filling with zeroes...
m30000| Thu Jun 14 01:28:52 [conn10] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:52 [conn10] update test2.foobar query: { _id: 0.0 } update: { _id: 0.0 } nscanned:0 nupdated:1 upsert:1 keyUpdates:0 locks(micros) W:115 r:155116 w:1769515 1211ms
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
checkpoint E
{
"hosts" : {
"localhost:30000::0" : {
"available" : 2,
"created" : 3
},
"localhost:30000::30" : {
"available" : 1,
"created" : 1
},
"localhost:30001::0" : {
"available" : 2,
"created" : 3
}
},
"replicaSets" : {
},
"createdByType" : {
"master" : 7
},
"totalAvailable" : 5,
"totalCreated" : 7,
"numDBClientConnection" : 9,
"numAScopedConnection" : 111,
"ok" : 1
}
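The connection-accounting document above (hosts, totalAvailable, totalCreated, and so on) is the output of the connPoolStats command, which the test prints while waiting for the balancer to go quiet. It can be fetched by hand from the mongos; a hedged example:

// hypothetical manual fetch of the same stats
var admin = connect("localhost:30999/admin");
printjson(admin.runCommand({ connPoolStats: 1 }));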
checkpoint F
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51095 #2 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:52 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:52 [initandlisten] connection accepted from 127.0.0.1:39009 #19 (19 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] connected connection!
m30999| Thu Jun 14 01:28:52 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:28:52 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:52 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:28:52 [initandlisten] connection accepted from 127.0.0.1:58900 #8 (8 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] connected connection!
m30999| Thu Jun 14 01:28:52 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51098 #3 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:52 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:28:52 [initandlisten] connection accepted from 127.0.0.1:39012 #20 (20 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] connected connection!
m30999| Thu Jun 14 01:28:52 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:28:52 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:52 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:28:52 [initandlisten] connection accepted from 127.0.0.1:58903 #9 (9 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] connected connection!
m30999| Thu Jun 14 01:28:52 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51095 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51101 #4 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51098 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51102 #5 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51101 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51103 #6 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51102 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51104 #7 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51103 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51105 #8 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51104 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51106 #9 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51105 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51107 #10 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51106 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51108 #11 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51107 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51109 #12 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51108 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51110 #13 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51109 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51111 #14 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51110 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51112 #15 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51111 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51113 #16 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51112 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51114 #17 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51113 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51115 #18 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51114 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51116 #19 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51115 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51117 #20 (3 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51116 (2 connections now open)
m30999| Thu Jun 14 01:28:52 [mongosMain] connection accepted from 127.0.0.1:51118 #21 (3 connections now open)
checkpoint G
m30999| Thu Jun 14 01:28:52 [conn] end connection 127.0.0.1:51117 (2 connections now open)
m30000| Thu Jun 14 01:28:52 [FileAllocator] done allocating datafile /data/db/auto20/test2.1, size: 32MB, took 0.708 secs
m30000| Thu Jun 14 01:28:52 [conn10] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.foo query:{ query: {}, orderby: { s: 1.0 } }
m30000| Thu Jun 14 01:28:52 [conn10] { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30000| Thu Jun 14 01:28:52 [conn10] query test.foo query: { query: {}, orderby: { s: 1.0 } } ntoreturn:0 keyUpdates:0 exception: too much data for sort() with no index. add an index or specify a smaller limit code:10128 locks(micros) W:115 r:356275 w:1772722 reslen:126 206ms
m30999| Thu Jun 14 01:28:52 [conn] warning: db exception when finishing on shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.foo @ 7|1||4fd97681a32f88a76fac6273", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Thu Jun 14 01:28:52 [conn3] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.foo query:{ query: {}, orderby: { s: 1.0 } }
m30001| Thu Jun 14 01:28:52 [conn3] { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30001| Thu Jun 14 01:28:52 [conn3] query test.foo query: { query: {}, orderby: { s: 1.0 } } ntoreturn:0 keyUpdates:0 exception: too much data for sort() with no index. add an index or specify a smaller limit code:10128 locks(micros) r:220356 reslen:126 220ms
m30000| Thu Jun 14 01:28:52 [conn10] end connection 127.0.0.1:38994 (19 connections now open)
m30999| Thu Jun 14 01:28:52 [conn] AssertionException while processing op type : 2004 to : test.foo :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
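The 10128 assertion above is the server refusing to do an in-memory sort over too much data. A minimal mongo shell sketch of the same failure and its usual fix, assuming a disposable standalone mongod on localhost:30000 (document count and sizes here are illustrative, not taken from the test):

    // populate test.foo with documents that are collectively too big to sort in memory
    var testDb = connect("localhost:30000/test");               // assumed throwaway mongod
    var big = new Array(1024 * 1024).join("x");                 // ~1MB filler string
    for (var i = 0; i < 64; i++) testDb.foo.insert({ _id: i, s: big });
    try {
        testDb.foo.find().sort({ s: 1 }).itcount();             // no index on s -> error code 10128
    } catch (e) {
        print("expected failure: " + e);
    }
    testDb.foo.ensureIndex({ s: 1 });                           // with an index, the sort can walk the index
    print(testDb.foo.find().sort({ s: 1 }).limit(5).itcount()); // now succeeds

Adding the index (or passing a small limit(), as the error text suggests) is what keeps the query under the in-memory sort cap.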
m30999| Thu Jun 14 01:28:52 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:28:52 [conn3] end connection 127.0.0.1:38980 (18 connections now open)
m30000| Thu Jun 14 01:28:52 [conn20] end connection 127.0.0.1:39012 (17 connections now open)
m30000| Thu Jun 14 01:28:52 [conn19] end connection 127.0.0.1:39009 (16 connections now open)
m30000| Thu Jun 14 01:28:52 [conn16] end connection 127.0.0.1:39003 (15 connections now open)
m30000| Thu Jun 14 01:28:52 [conn5] end connection 127.0.0.1:38985 (14 connections now open)
m30001| Thu Jun 14 01:28:52 [conn8] end connection 127.0.0.1:58900 (8 connections now open)
m30001| Thu Jun 14 01:28:52 [conn9] end connection 127.0.0.1:58903 (7 connections now open)
m30001| Thu Jun 14 01:28:52 [conn6] end connection 127.0.0.1:58894 (6 connections now open)
m30001| Thu Jun 14 01:28:52 [conn4] end connection 127.0.0.1:58887 (5 connections now open)
m30001| Thu Jun 14 01:28:52 [conn3] end connection 127.0.0.1:58885 (4 connections now open)
Thu Jun 14 01:28:53 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:28:53 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:28:53 [conn7] end connection 127.0.0.1:38989 (13 connections now open)
m30000| Thu Jun 14 01:28:53 [conn6] end connection 127.0.0.1:38987 (13 connections now open)
m30000| Thu Jun 14 01:28:53 [conn9] end connection 127.0.0.1:38991 (12 connections now open)
m30000| Thu Jun 14 01:28:53 [conn8] end connection 127.0.0.1:38990 (12 connections now open)
m30001| Thu Jun 14 01:28:53 [conn7] end connection 127.0.0.1:58895 (3 connections now open)
m30000| Thu Jun 14 01:28:53 [conn17] end connection 127.0.0.1:39006 (9 connections now open)
Thu Jun 14 01:28:54 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:28:54 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:28:54 [interruptThread] now exiting
m30000| Thu Jun 14 01:28:54 dbexit:
m30000| Thu Jun 14 01:28:54 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:28:54 [interruptThread] closing listening socket: 18
m30000| Thu Jun 14 01:28:54 [interruptThread] closing listening socket: 19
m30000| Thu Jun 14 01:28:54 [interruptThread] closing listening socket: 20
m30000| Thu Jun 14 01:28:54 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:28:54 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:28:54 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:28:54 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:28:54 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:28:54 [conn5] end connection 127.0.0.1:58889 (2 connections now open)
m30000| Thu Jun 14 01:28:54 [conn15] end connection 127.0.0.1:39002 (8 connections now open)
m30000| Thu Jun 14 01:28:54 [conn13] end connection 127.0.0.1:39000 (8 connections now open)
m30000| Thu Jun 14 01:28:54 [conn14] end connection 127.0.0.1:39001 (7 connections now open)
m30000| Thu Jun 14 01:28:54 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:28:54 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:28:54 dbexit: really exiting now
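The interruptThread sequence above is the clean-shutdown path taken when the shell harness sends signal 15. A comparable effect can be had from a mongo shell with the shutdown admin command; a hedged sketch against a disposable server (not the test fixture):

    // ask the server to exit cleanly; comparable in effect to kill -15 <pid>
    var admin = connect("localhost:30000/admin");             // assumed throwaway mongod
    try {
        admin.runCommand({ shutdown: 1 });                    // the connection drops as the server exits
    } catch (e) {
        print("connection closed by shutdown: " + e);
    }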
Thu Jun 14 01:28:55 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:28:55 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:28:55 [interruptThread] now exiting
m30001| Thu Jun 14 01:28:55 dbexit:
m30001| Thu Jun 14 01:28:55 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:28:55 [interruptThread] closing listening socket: 21
m30001| Thu Jun 14 01:28:55 [interruptThread] closing listening socket: 22
m30001| Thu Jun 14 01:28:55 [interruptThread] closing listening socket: 23
m30001| Thu Jun 14 01:28:55 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:28:55 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:28:55 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:28:55 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:28:55 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:28:55 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:28:55 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:28:55 dbexit: really exiting now
Thu Jun 14 01:28:56 shell: stopped mongo program on port 30001
*** ShardingTest auto2 completed successfully in 25.393 seconds ***
25551.613808ms
Thu Jun 14 01:28:57 [initandlisten] connection accepted from 127.0.0.1:59033 #11 (10 connections now open)
*******************************************
Test : bad_config_load.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/bad_config_load.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/bad_config_load.js";TestData.testFile = "bad_config_load.js";TestData.testName = "bad_config_load";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:28:57 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:28:57 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:28:57
m30000| Thu Jun 14 01:28:57 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:28:57
m30000| Thu Jun 14 01:28:57 [initandlisten] MongoDB starting : pid=22501 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:28:57 [initandlisten]
m30000| Thu Jun 14 01:28:57 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:28:57 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:28:57 [initandlisten]
m30000| Thu Jun 14 01:28:57 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:28:57 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:28:57 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:28:57 [initandlisten]
m30000| Thu Jun 14 01:28:57 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:28:57 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:28:57 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:28:57 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:28:57 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:28:57 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
m30000| Thu Jun 14 01:28:57 [initandlisten] connection accepted from 127.0.0.1:39034 #1 (1 connection now open)
Thu Jun 14 01:28:57 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30001| Thu Jun 14 01:28:57
m30001| Thu Jun 14 01:28:57 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:28:57
m30001| Thu Jun 14 01:28:57 [initandlisten] MongoDB starting : pid=22514 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:28:57 [initandlisten]
m30001| Thu Jun 14 01:28:57 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:28:57 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:28:57 [initandlisten]
m30001| Thu Jun 14 01:28:57 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:28:57 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:28:57 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:28:57 [initandlisten]
m30001| Thu Jun 14 01:28:57 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:28:57 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:28:57 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:28:57 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:28:57 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:28:57 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:28:57 [initandlisten] connection accepted from 127.0.0.1:58926 #1 (1 connection now open)
ShardingTest test :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
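The block above is the summary ShardingTest prints once the shards and the config server are up. A hedged sketch of how a jstest typically builds such a fixture (option names assumed, not copied from bad_config_load.js):

    // rough shape of a two-shard sharding jstest; exact options vary per test
    var st = new ShardingTest({ name: "test", shards: 2, mongos: 1 });
    var mongos = st.s;                       // connection to the mongos router
    var configDB = st.config;                // 'config' database on the config server
    // ... test body would go here ...
    st.stop();                               // tears down mongos, shards, and config server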
Thu Jun 14 01:28:57 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:28:57 [initandlisten] connection accepted from 127.0.0.1:39037 #2 (2 connections now open)
m30000| Thu Jun 14 01:28:57 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:28:57 [FileAllocator] creating directory /data/db/test0/_tmp
m30999| Thu Jun 14 01:28:57 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:28:57 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22529 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:28:57 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:28:57 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:28:57 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:28:57 [initandlisten] connection accepted from 127.0.0.1:39039 #3 (3 connections now open)
m30000| Thu Jun 14 01:28:57 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0.25 secs
m30000| Thu Jun 14 01:28:57 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:28:58 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 16MB, took 0.307 secs
m30000| Thu Jun 14 01:28:58 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [conn2] insert config.settings keyUpdates:0 locks(micros) w:575029 574ms
m30000| Thu Jun 14 01:28:58 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:28:58 [initandlisten] connection accepted from 127.0.0.1:39042 #4 (4 connections now open)
m30000| Thu Jun 14 01:28:58 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [initandlisten] connection accepted from 127.0.0.1:39043 #5 (5 connections now open)
m30000| Thu Jun 14 01:28:58 [conn5] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [conn5] info: creating collection config.chunks on add index
m30999| Thu Jun 14 01:28:58 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:28:58 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:28:58 [Balancer] about to contact config servers and shards
m30000| Thu Jun 14 01:28:58 [conn5] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [conn5] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [conn5] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:28:58 [conn5] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:58 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:28:58 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:28:58
m30999| Thu Jun 14 01:28:58 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:28:58 [initandlisten] connection accepted from 127.0.0.1:39044 #6 (6 connections now open)
m30000| Thu Jun 14 01:28:58 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:28:58 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:58 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651738:1804289383' acquired, ts : 4fd9769a84f86c81bdad06ed
m30999| Thu Jun 14 01:28:58 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651738:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:28:58 [conn5] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:58 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651738:1804289383' unlocked.
m30000| Thu Jun 14 01:28:58 [conn5] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 1 total records. 0 secs
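The Balancer lines above take and release a distributed lock that lives on the config server; the config.locks and config.lockpings index builds in this stretch belong to that machinery. A hedged sketch of inspecting it directly (collection names taken from the log):

    // peek at the balancer's distributed lock state on the single config server
    var cfg = connect("localhost:30000/config");
    printjson(cfg.locks.findOne({ _id: "balancer" }));                    // holder, state, ts
    // most recent lock ping; assumes the LockPinger has written at least one document
    printjson(cfg.lockpings.find().sort({ ping: -1 }).limit(1).next());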
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:28:58 [mongosMain] connection accepted from 127.0.0.1:51132 #1 (1 connection now open)
m30999| Thu Jun 14 01:28:58 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:28:58 [conn5] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:28:58 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:28:58 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:28:58 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:28:58 [initandlisten] connection accepted from 127.0.0.1:58936 #2 (2 connections now open)
m30999| Thu Jun 14 01:28:58 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:28:58 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Thu Jun 14 01:28:58 [conn] found 0 dropped collections and 0 sharded collections for database admin
m30999| Thu Jun 14 01:28:58 [conn] going to start draining shard: shard0000
m30999| primaryLocalDoc: { _id: "local", primary: "shard0000" }
m30999| Thu Jun 14 01:28:58 [conn] going to remove shard: shard0000
----
Setup complete!
----
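The addShard and removeShard exchange above maps onto plain admin commands against the mongos; a hedged sketch using the shard ids and hosts from the log (return documents abbreviated in the comments):

    // run against the mongos on port 30999; removeShard is a two-phase drain
    var admin = connect("localhost:30999/admin");
    printjson(admin.runCommand({ addShard: "localhost:30000" }));    // { shardAdded: "shard0000", ok: 1 }
    printjson(admin.runCommand({ addShard: "localhost:30001" }));    // { shardAdded: "shard0001", ok: 1 }
    printjson(admin.runCommand({ removeShard: "shard0000" }));       // draining "started" on the first call
    printjson(admin.runCommand({ removeShard: "shard0000" }));       // repeat until state is "completed"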
m30999| Thu Jun 14 01:28:58 [conn] Removing ReplicaSetMonitor for shard0000 from replica set table
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:28:58 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:58 [conn] connected connection!
m30999| Thu Jun 14 01:28:58 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd9769a84f86c81bdad06ec
m30999| Thu Jun 14 01:28:58 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:28:58 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd9769a84f86c81bdad06ec'), authoritative: true }
m30999| Thu Jun 14 01:28:58 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:28:58 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:28:58 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:28:58 [conn] connected connection!
m30999| Thu Jun 14 01:28:58 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd9769a84f86c81bdad06ec
m30999| Thu Jun 14 01:28:58 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:28:58 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd9769a84f86c81bdad06ec'), authoritative: true }
m30999| Thu Jun 14 01:28:58 BackgroundJob starting: WriteBackListener-localhost:30000
m30001| Thu Jun 14 01:28:58 [initandlisten] connection accepted from 127.0.0.1:58937 #3 (3 connections now open)
m30000| Thu Jun 14 01:28:58 [initandlisten] connection accepted from 127.0.0.1:39048 #7 (7 connections now open)
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing over 1 shards
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
----
Stopping 30000...
----
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:28:58 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:28:58 [WriteBackListener-localhost:30000] localhost:30000 is not a shard node
m30000| Thu Jun 14 01:28:58 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:28:58 [interruptThread] now exiting
m30000| Thu Jun 14 01:28:58 dbexit:
m30000| Thu Jun 14 01:28:58 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:28:58 [interruptThread] closing listening socket: 19
m30000| Thu Jun 14 01:28:58 [interruptThread] closing listening socket: 20
m30000| Thu Jun 14 01:28:58 [interruptThread] closing listening socket: 21
m30000| Thu Jun 14 01:28:58 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:28:58 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:28:58 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:28:58 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:28:58 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 32MB, took 0.613 secs
m30000| Thu Jun 14 01:28:58 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:28:58 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:28:58 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:28:58 dbexit: really exiting now
Thu Jun 14 01:28:59 shell: stopped mongo program on port 30000
----
Config flushed and config server down!
----
m30999| Thu Jun 14 01:28:59 [conn] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000]
m30999| Thu Jun 14 01:28:59 [conn] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:28:59 [conn] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: "foo" }
"error {\n\t\"$err\" : \"error loading initial database config information :: caused by :: DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: \\\"foo\\\" }\",\n\t\"code\" : 10276\n}"
"error {\n\t\"$err\" : \"error loading initial database config information :: caused by :: DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: \\\"foo\\\" }\",\n\t\"code\" : 10276\n}"
----
Done!
----
m30999| Thu Jun 14 01:28:59 [conn] warning: error loading initial database config information :: caused by :: DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: "foo" }
m30999| Thu Jun 14 01:28:59 [conn] AssertionException while processing op type : 2004 to : foo.bar :: caused by :: 10276 error loading initial database config information :: caused by :: DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: "foo" }
m30999| Thu Jun 14 01:28:59 [conn] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000]
m30999| Thu Jun 14 01:28:59 [conn] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:28:59 [conn] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: "foo" }
m30999| Thu Jun 14 01:28:59 [conn] warning: error loading initial database config information :: caused by :: DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: "foo" }
m30999| Thu Jun 14 01:28:59 [conn] AssertionException while processing op type : 2004 to : foo.bar :: caused by :: 10276 error loading initial database config information :: caused by :: DBClientBase::findN: transport error: localhost:30000 ns: config.databases query: { _id: "foo" }
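The repeated failures above are the point of bad_config_load.js: with the only config server down, the mongos cannot load the config.databases document for an unknown database, so every attempt fails with code 10276. A rough sketch of that check, inferred from the log rather than copied from the jstest source:

    // against the mongos on 30999 after the config server has been stopped
    var mongos = connect("localhost:30999/foo");
    var err = assert.throws(function() { return mongos.bar.findOne(); });   // config load fails
    print(err);   // message mentions code 10276 and "transport error: localhost:30000"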
m30999| Thu Jun 14 01:28:59 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Thu Jun 14 01:28:59 [conn3] end connection 127.0.0.1:58937 (2 connections now open)
Thu Jun 14 01:29:00 shell: stopped mongo program on port 30999
Thu Jun 14 01:29:00 No db started on port: 30000
Thu Jun 14 01:29:00 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:29:00 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:29:00 [interruptThread] now exiting
m30001| Thu Jun 14 01:29:00 dbexit:
m30001| Thu Jun 14 01:29:00 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:29:00 [interruptThread] closing listening socket: 22
m30001| Thu Jun 14 01:29:00 [interruptThread] closing listening socket: 23
m30001| Thu Jun 14 01:29:00 [interruptThread] closing listening socket: 24
m30001| Thu Jun 14 01:29:00 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:29:00 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:29:00 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:29:00 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:29:00 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:29:00 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:29:00 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:29:00 dbexit: really exiting now
Thu Jun 14 01:29:01 shell: stopped mongo program on port 30001
*** ShardingTest test completed successfully in 4.142 seconds ***
4227.575064ms
Thu Jun 14 01:29:01 [initandlisten] connection accepted from 127.0.0.1:59050 #12 (11 connections now open)
*******************************************
Test : bouncing_count.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/bouncing_count.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/bouncing_count.js";TestData.testFile = "bouncing_count.js";TestData.testName = "bouncing_count";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:29:01 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:29:01 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:29:01
m30000| Thu Jun 14 01:29:01 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:29:01
m30000| Thu Jun 14 01:29:01 [initandlisten] MongoDB starting : pid=22562 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:29:01 [initandlisten]
m30000| Thu Jun 14 01:29:01 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:29:01 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:29:01 [initandlisten]
m30000| Thu Jun 14 01:29:01 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:29:01 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:29:01 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:29:01 [initandlisten]
m30000| Thu Jun 14 01:29:01 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:29:01 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:29:01 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:29:01 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:29:01 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:29:01 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:29:01 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:29:01 [initandlisten] connection accepted from 127.0.0.1:39051 #1 (1 connection now open)
m30001| Thu Jun 14 01:29:01
m30001| Thu Jun 14 01:29:01 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:29:01
m30001| Thu Jun 14 01:29:01 [initandlisten] MongoDB starting : pid=22575 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:29:01 [initandlisten]
m30001| Thu Jun 14 01:29:01 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:29:01 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:29:01 [initandlisten]
m30001| Thu Jun 14 01:29:01 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:29:01 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:29:01 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:29:01 [initandlisten]
m30001| Thu Jun 14 01:29:01 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:29:01 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:29:01 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:29:01 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:29:01 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:29:01 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/test2'
m30001| Thu Jun 14 01:29:01 [initandlisten] connection accepted from 127.0.0.1:58943 #1 (1 connection now open)
Thu Jun 14 01:29:01 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/test2
m30002| Thu Jun 14 01:29:01
m30002| Thu Jun 14 01:29:01 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:29:01
m30002| Thu Jun 14 01:29:01 [initandlisten] MongoDB starting : pid=22588 port=30002 dbpath=/data/db/test2 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:29:01 [initandlisten]
m30002| Thu Jun 14 01:29:01 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:29:01 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:29:01 [initandlisten]
m30002| Thu Jun 14 01:29:01 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:29:01 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:29:01 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:29:01 [initandlisten]
m30002| Thu Jun 14 01:29:01 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:29:01 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:29:01 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:29:01 [initandlisten] options: { dbpath: "/data/db/test2", port: 30002 }
m30002| Thu Jun 14 01:29:01 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:29:01 [websvr] admin web console waiting for connections on port 31002
Resetting db path '/data/db/test3'
m30002| Thu Jun 14 01:29:02 [initandlisten] connection accepted from 127.0.0.1:46574 #1 (1 connection now open)
Thu Jun 14 01:29:02 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30003 --dbpath /data/db/test3
m30003| Thu Jun 14 01:29:02
m30003| Thu Jun 14 01:29:02 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30003| Thu Jun 14 01:29:02
m30003| Thu Jun 14 01:29:02 [initandlisten] MongoDB starting : pid=22601 port=30003 dbpath=/data/db/test3 32-bit host=domU-12-31-39-01-70-B4
m30003| Thu Jun 14 01:29:02 [initandlisten]
m30003| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30003| Thu Jun 14 01:29:02 [initandlisten] ** Not recommended for production.
m30003| Thu Jun 14 01:29:02 [initandlisten]
m30003| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30003| Thu Jun 14 01:29:02 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30003| Thu Jun 14 01:29:02 [initandlisten] ** with --journal, the limit is lower
m30003| Thu Jun 14 01:29:02 [initandlisten]
m30003| Thu Jun 14 01:29:02 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30003| Thu Jun 14 01:29:02 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30003| Thu Jun 14 01:29:02 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30003| Thu Jun 14 01:29:02 [initandlisten] options: { dbpath: "/data/db/test3", port: 30003 }
m30003| Thu Jun 14 01:29:02 [initandlisten] waiting for connections on port 30003
m30003| Thu Jun 14 01:29:02 [websvr] admin web console waiting for connections on port 31003
Resetting db path '/data/db/test4'
m30003| Thu Jun 14 01:29:02 [initandlisten] connection accepted from 127.0.0.1:57622 #1 (1 connection now open)
Thu Jun 14 01:29:02 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30004 --dbpath /data/db/test4
m30004| Thu Jun 14 01:29:02
m30004| Thu Jun 14 01:29:02 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30004| Thu Jun 14 01:29:02
m30004| Thu Jun 14 01:29:02 [initandlisten] MongoDB starting : pid=22614 port=30004 dbpath=/data/db/test4 32-bit host=domU-12-31-39-01-70-B4
m30004| Thu Jun 14 01:29:02 [initandlisten]
m30004| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30004| Thu Jun 14 01:29:02 [initandlisten] ** Not recommended for production.
m30004| Thu Jun 14 01:29:02 [initandlisten]
m30004| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30004| Thu Jun 14 01:29:02 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30004| Thu Jun 14 01:29:02 [initandlisten] ** with --journal, the limit is lower
m30004| Thu Jun 14 01:29:02 [initandlisten]
m30004| Thu Jun 14 01:29:02 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30004| Thu Jun 14 01:29:02 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30004| Thu Jun 14 01:29:02 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30004| Thu Jun 14 01:29:02 [initandlisten] options: { dbpath: "/data/db/test4", port: 30004 }
m30004| Thu Jun 14 01:29:02 [initandlisten] waiting for connections on port 30004
m30004| Thu Jun 14 01:29:02 [websvr] admin web console waiting for connections on port 31004
Resetting db path '/data/db/test5'
m30004| Thu Jun 14 01:29:02 [initandlisten] connection accepted from 127.0.0.1:52256 #1 (1 connection now open)
Thu Jun 14 01:29:02 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30005 --dbpath /data/db/test5
m30005| Thu Jun 14 01:29:02
m30005| Thu Jun 14 01:29:02 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30005| Thu Jun 14 01:29:02
m30005| Thu Jun 14 01:29:02 [initandlisten] MongoDB starting : pid=22627 port=30005 dbpath=/data/db/test5 32-bit host=domU-12-31-39-01-70-B4
m30005| Thu Jun 14 01:29:02 [initandlisten]
m30005| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30005| Thu Jun 14 01:29:02 [initandlisten] ** Not recommended for production.
m30005| Thu Jun 14 01:29:02 [initandlisten]
m30005| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30005| Thu Jun 14 01:29:02 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30005| Thu Jun 14 01:29:02 [initandlisten] ** with --journal, the limit is lower
m30005| Thu Jun 14 01:29:02 [initandlisten]
m30005| Thu Jun 14 01:29:02 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30005| Thu Jun 14 01:29:02 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30005| Thu Jun 14 01:29:02 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30005| Thu Jun 14 01:29:02 [initandlisten] options: { dbpath: "/data/db/test5", port: 30005 }
m30005| Thu Jun 14 01:29:02 [initandlisten] waiting for connections on port 30005
m30005| Thu Jun 14 01:29:02 [websvr] admin web console waiting for connections on port 31005
Resetting db path '/data/db/test6'
m30005| Thu Jun 14 01:29:02 [initandlisten] connection accepted from 127.0.0.1:36048 #1 (1 connection now open)
Thu Jun 14 01:29:02 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30006 --dbpath /data/db/test6
m30006| Thu Jun 14 01:29:02
m30006| Thu Jun 14 01:29:02 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30006| Thu Jun 14 01:29:02
m30006| Thu Jun 14 01:29:02 [initandlisten] MongoDB starting : pid=22640 port=30006 dbpath=/data/db/test6 32-bit host=domU-12-31-39-01-70-B4
m30006| Thu Jun 14 01:29:02 [initandlisten]
m30006| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30006| Thu Jun 14 01:29:02 [initandlisten] ** Not recommended for production.
m30006| Thu Jun 14 01:29:02 [initandlisten]
m30006| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30006| Thu Jun 14 01:29:02 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30006| Thu Jun 14 01:29:02 [initandlisten] ** with --journal, the limit is lower
m30006| Thu Jun 14 01:29:02 [initandlisten]
m30006| Thu Jun 14 01:29:02 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30006| Thu Jun 14 01:29:02 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30006| Thu Jun 14 01:29:02 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30006| Thu Jun 14 01:29:02 [initandlisten] options: { dbpath: "/data/db/test6", port: 30006 }
m30006| Thu Jun 14 01:29:02 [initandlisten] waiting for connections on port 30006
m30006| Thu Jun 14 01:29:02 [websvr] admin web console waiting for connections on port 31006
Resetting db path '/data/db/test7'
m30006| Thu Jun 14 01:29:02 [initandlisten] connection accepted from 127.0.0.1:57753 #1 (1 connection now open)
Thu Jun 14 01:29:02 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30007 --dbpath /data/db/test7
m30007| Thu Jun 14 01:29:02
m30007| Thu Jun 14 01:29:02 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30007| Thu Jun 14 01:29:02
m30007| Thu Jun 14 01:29:02 [initandlisten] MongoDB starting : pid=22653 port=30007 dbpath=/data/db/test7 32-bit host=domU-12-31-39-01-70-B4
m30007| Thu Jun 14 01:29:02 [initandlisten]
m30007| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30007| Thu Jun 14 01:29:02 [initandlisten] ** Not recommended for production.
m30007| Thu Jun 14 01:29:02 [initandlisten]
m30007| Thu Jun 14 01:29:02 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30007| Thu Jun 14 01:29:02 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30007| Thu Jun 14 01:29:02 [initandlisten] ** with --journal, the limit is lower
m30007| Thu Jun 14 01:29:02 [initandlisten]
m30007| Thu Jun 14 01:29:02 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30007| Thu Jun 14 01:29:02 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30007| Thu Jun 14 01:29:02 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30007| Thu Jun 14 01:29:02 [initandlisten] options: { dbpath: "/data/db/test7", port: 30007 }
m30007| Thu Jun 14 01:29:02 [initandlisten] waiting for connections on port 30007
m30007| Thu Jun 14 01:29:02 [websvr] admin web console waiting for connections on port 31007
Resetting db path '/data/db/test8'
m30007| Thu Jun 14 01:29:03 [initandlisten] connection accepted from 127.0.0.1:56441 #1 (1 connection now open)
Thu Jun 14 01:29:03 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30008 --dbpath /data/db/test8
m30008| Thu Jun 14 01:29:03
m30008| Thu Jun 14 01:29:03 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30008| Thu Jun 14 01:29:03
m30008| Thu Jun 14 01:29:03 [initandlisten] MongoDB starting : pid=22666 port=30008 dbpath=/data/db/test8 32-bit host=domU-12-31-39-01-70-B4
m30008| Thu Jun 14 01:29:03 [initandlisten]
m30008| Thu Jun 14 01:29:03 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30008| Thu Jun 14 01:29:03 [initandlisten] ** Not recommended for production.
m30008| Thu Jun 14 01:29:03 [initandlisten]
m30008| Thu Jun 14 01:29:03 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30008| Thu Jun 14 01:29:03 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30008| Thu Jun 14 01:29:03 [initandlisten] ** with --journal, the limit is lower
m30008| Thu Jun 14 01:29:03 [initandlisten]
m30008| Thu Jun 14 01:29:03 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30008| Thu Jun 14 01:29:03 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30008| Thu Jun 14 01:29:03 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30008| Thu Jun 14 01:29:03 [initandlisten] options: { dbpath: "/data/db/test8", port: 30008 }
m30008| Thu Jun 14 01:29:03 [initandlisten] waiting for connections on port 30008
m30008| Thu Jun 14 01:29:03 [websvr] admin web console waiting for connections on port 31008
Resetting db path '/data/db/test9'
m30008| Thu Jun 14 01:29:03 [initandlisten] connection accepted from 127.0.0.1:51883 #1 (1 connection now open)
Thu Jun 14 01:29:03 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30009 --dbpath /data/db/test9
m30009| Thu Jun 14 01:29:03
m30009| Thu Jun 14 01:29:03 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30009| Thu Jun 14 01:29:03
m30009| Thu Jun 14 01:29:03 [initandlisten] MongoDB starting : pid=22679 port=30009 dbpath=/data/db/test9 32-bit host=domU-12-31-39-01-70-B4
m30009| Thu Jun 14 01:29:03 [initandlisten]
m30009| Thu Jun 14 01:29:03 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30009| Thu Jun 14 01:29:03 [initandlisten] ** Not recommended for production.
m30009| Thu Jun 14 01:29:03 [initandlisten]
m30009| Thu Jun 14 01:29:03 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30009| Thu Jun 14 01:29:03 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30009| Thu Jun 14 01:29:03 [initandlisten] ** with --journal, the limit is lower
m30009| Thu Jun 14 01:29:03 [initandlisten]
m30009| Thu Jun 14 01:29:03 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30009| Thu Jun 14 01:29:03 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30009| Thu Jun 14 01:29:03 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30009| Thu Jun 14 01:29:03 [initandlisten] options: { dbpath: "/data/db/test9", port: 30009 }
m30009| Thu Jun 14 01:29:03 [initandlisten] waiting for connections on port 30009
m30009| Thu Jun 14 01:29:03 [websvr] admin web console waiting for connections on port 31009
"localhost:30000"
m30009| Thu Jun 14 01:29:03 [initandlisten] connection accepted from 127.0.0.1:46267 #1 (1 connection now open)
ShardingTest test :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001,
connection to localhost:30002,
connection to localhost:30003,
connection to localhost:30004,
connection to localhost:30005,
connection to localhost:30006,
connection to localhost:30007,
connection to localhost:30008,
connection to localhost:30009
]
}
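The fixture above is a ten-shard cluster fronted by three mongos routers (the second and third are started just below). A hedged sketch of the shape of such a setup, with option names assumed rather than read from bouncing_count.js:

    // ten shards and three mongos against a single config server
    var st = new ShardingTest({ name: "test", shards: 10, mongos: 3 });
    var admin = st.s0.getDB("admin");            // st.s0, st.s1, st.s2 are the three mongos
    admin.runCommand({ enableSharding: "foo" });                           // hypothetical database
    admin.runCommand({ shardCollection: "foo.bar", key: { _id: 1 } });     // hypothetical namespace
    st.stop();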
m30000| Thu Jun 14 01:29:03 [initandlisten] connection accepted from 127.0.0.1:39070 #2 (2 connections now open)
m30000| Thu Jun 14 01:29:03 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:29:03 [FileAllocator] creating directory /data/db/test0/_tmp
Thu Jun 14 01:29:03 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30999| Thu Jun 14 01:29:03 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:29:03 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22694 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:29:03 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:29:03 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:29:03 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:29:03 [initandlisten] connection accepted from 127.0.0.1:39072 #3 (3 connections now open)
m30000| Thu Jun 14 01:29:03 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0.268 secs
m30000| Thu Jun 14 01:29:03 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:29:04 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 16MB, took 0.29 secs
m30000| Thu Jun 14 01:29:04 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn2] insert config.settings keyUpdates:0 locks(micros) w:579482 579ms
m30000| Thu Jun 14 01:29:04 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39075 #4 (4 connections now open)
m30000| Thu Jun 14 01:29:04 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:29:04 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:29:04 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39076 #5 (5 connections now open)
m30000| Thu Jun 14 01:29:04 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:04 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:29:04 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:29:04 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:29:04 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:29:04 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:29:04 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:29:04
m30999| Thu Jun 14 01:29:04 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651744:1804289383' acquired, ts : 4fd976a08c7a5fd108c1eeb0
m30999| Thu Jun 14 01:29:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651744:1804289383' unlocked.
m30999| Thu Jun 14 01:29:04 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651744:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:04 [mongosMain] connection accepted from 127.0.0.1:51164 #1 (1 connection now open)
Thu Jun 14 01:29:04 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:30000
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39078 #6 (6 connections now open)
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39079 #7 (7 connections now open)
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39080 #8 (8 connections now open)
Thu Jun 14 01:29:04 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30997 --configdb localhost:30000
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39082 #9 (9 connections now open)
m30997| Thu Jun 14 01:29:04 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30997| Thu Jun 14 01:29:04 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22729 port=30997 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30997| Thu Jun 14 01:29:04 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30997| Thu Jun 14 01:29:04 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30997| Thu Jun 14 01:29:04 [mongosMain] options: { configdb: "localhost:30000", port: 30997 }
m30997| Thu Jun 14 01:29:04 [websvr] admin web console waiting for connections on port 31997
m30997| Thu Jun 14 01:29:04 [Balancer] about to contact config servers and shards
m30997| Thu Jun 14 01:29:04 [mongosMain] waiting for connections on port 30997
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39084 #10 (10 connections now open)
m30997| Thu Jun 14 01:29:04 [Balancer] config servers and shards contacted successfully
m30997| Thu Jun 14 01:29:04 [Balancer] balancer id: domU-12-31-39-01-70-B4:30997 started at Jun 14 01:29:04
m30997| Thu Jun 14 01:29:04 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39085 #11 (11 connections now open)
m30997| Thu Jun 14 01:29:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651744:1804289383' acquired, ts : 4fd976a0db1638f412575830
m30997| Thu Jun 14 01:29:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651744:1804289383' unlocked.
m30997| Thu Jun 14 01:29:04 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30997:1339651744:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:29:04 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:29:04 [mongosMain] MongoS version 2.1.2-pre- starting: pid=22713 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:29:04 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:29:04 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:29:04 [mongosMain] options: { configdb: "localhost:30000", port: 30998 }
m30998| Thu Jun 14 01:29:04 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:29:04 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:29:04 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:29:04 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:29:04 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:29:04
m30998| Thu Jun 14 01:29:04 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:29:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651744:1804289383' acquired, ts : 4fd976a0f3078a2c877d8227
m30998| Thu Jun 14 01:29:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651744:1804289383' unlocked.
m30998| Thu Jun 14 01:29:04 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30998:1339651744:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:29:04 [mongosMain] connection accepted from 127.0.0.1:35500 #1 (1 connection now open)
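The three mongos processes above (ports 30999, 30998 and 30997) all point at the single config server on localhost:30000, matching the "shell: started program ... --configdb localhost:30000" lines. A minimal sketch, not the actual test file, of attaching a mongo shell to the first of them; the handle names below (mongos, admin, config) are assumptions reused in the later sketches:

// Connect to the first mongos (logged above as m30999) and keep handles
// to the admin and config databases used by the commands sketched below.
var mongos = new Mongo("localhost:30999");
var admin  = mongos.getDB("admin");
var config = mongos.getDB("config");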
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:29:04 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:29:04 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:29:04 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30997| Thu Jun 14 01:29:04 [mongosMain] connection accepted from 127.0.0.1:51923 #1 (1 connection now open)
m30000| Thu Jun 14 01:29:04 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 32MB, took 0.623 secs
m30000| Thu Jun 14 01:29:04 [conn4] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:305 w:1435 reslen:177 290ms
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:58977 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:46607 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30003
m30003| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:57654 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0003", host: "localhost:30003" }
{ "shardAdded" : "shard0003", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30004
m30004| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:52287 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0004", host: "localhost:30004" }
{ "shardAdded" : "shard0004", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30005
m30005| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:36078 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0005", host: "localhost:30005" }
{ "shardAdded" : "shard0005", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30006
m30006| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:57782 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0006", host: "localhost:30006" }
{ "shardAdded" : "shard0006", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30007
m30007| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:56469 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0007", host: "localhost:30007" }
{ "shardAdded" : "shard0007", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30008
m30008| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:51910 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0008", host: "localhost:30008" }
{ "shardAdded" : "shard0008", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30009
m30009| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:46293 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] going to add shard: { _id: "shard0009", host: "localhost:30009" }
{ "shardAdded" : "shard0009", "ok" : 1 }
m30999| Thu Jun 14 01:29:04 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:29:04 [conn] put [foo] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:29:04 [conn] enabling sharding on: foo
m30999| Thu Jun 14 01:29:04 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:29:04 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
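The "enabling sharding on: foo" and "CMD: shardcollection" lines above are the mongos echoing the enableSharding and shardCollection admin commands (the shell helpers sh.enableSharding() and sh.shardCollection() wrap the same commands). A sketch, again assuming the admin handle from the first sketch:

// Enable sharding for the foo database, then shard foo.bar on _id.
printjson(admin.runCommand({ enableSharding: "foo" }));
printjson(admin.runCommand({ shardCollection: "foo.bar", key: { _id: 1 } }));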
m30000| Thu Jun 14 01:29:04 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:29:04 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:29:04 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:29:04 [FileAllocator] creating directory /data/db/test1/_tmp
m30999| Thu Jun 14 01:29:04 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd976a08c7a5fd108c1eeb1 based on: (empty)
m30999| Thu Jun 14 01:29:04 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976a08c7a5fd108c1eeaf
m30999| Thu Jun 14 01:29:04 [conn] resetting shard version of foo.bar on localhost:30000, version is zero
m30000| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:39096 #12 (12 connections now open)
m30999| Thu Jun 14 01:29:04 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976a08c7a5fd108c1eeaf
m30001| Thu Jun 14 01:29:04 [initandlisten] connection accepted from 127.0.0.1:58987 #3 (3 connections now open)
m30001| Thu Jun 14 01:29:05 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.282 secs
m30001| Thu Jun 14 01:29:05 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:29:05 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.325 secs
m30001| Thu Jun 14 01:29:05 [conn2] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:29:05 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:29:05 [conn2] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:29:05 [conn2] insert foo.system.indexes keyUpdates:0 locks(micros) R:9 W:75 r:307 w:623408 623ms
m30001| Thu Jun 14 01:29:05 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd976a08c7a5fd108c1eeaf'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:72 reslen:51 620ms
m30001| Thu Jun 14 01:29:05 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:29:05 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd976a08c7a5fd108c1eeaf
m30002| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:46618 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30002, version is zero
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30003 serverID: 4fd976a08c7a5fd108c1eeaf
m30003| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:57665 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30003, version is zero
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30004 serverID: 4fd976a08c7a5fd108c1eeaf
m30004| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:52298 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30004, version is zero
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30005 serverID: 4fd976a08c7a5fd108c1eeaf
m30005| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:36089 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30005, version is zero
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30006 serverID: 4fd976a08c7a5fd108c1eeaf
m30006| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:57793 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30006, version is zero
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30007 serverID: 4fd976a08c7a5fd108c1eeaf
m30007| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:56480 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30007, version is zero
m30000| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:39098 #13 (13 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30008 serverID: 4fd976a08c7a5fd108c1eeaf
m30008| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:51921 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30008, version is zero
m30999| Thu Jun 14 01:29:05 [conn] creating WriteBackListener for: localhost:30009 serverID: 4fd976a08c7a5fd108c1eeaf
m30009| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:46304 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:05 [conn] resetting shard version of foo.bar on localhost:30009, version is zero
----
Splitting up the collection...
----
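From here the log repeats one two-step pattern per shard: split the top chunk of foo.bar at the next integer _id, then move the new chunk (from that _id up to MaxKey) to the matching shard, so _id 0 lands on shard0000, _id 1 on shard0001, and so on. Whether the original test unrolls these calls or loops is not recoverable from the log; a sketch of the loop form, assuming the admin handle from the first sketch and one iteration per shard:

// For each i: split at { _id: i }, then move the chunk containing { _id: i }
// (the range [i, MaxKey) after the split) to shard000i.
for (var i = 0; i < 10; i++) {
    printjson(admin.runCommand({ split: "foo.bar", middle: { _id: i } }));
    printjson(admin.runCommand({ moveChunk: "foo.bar", find: { _id: i }, to: "shard000" + i }));
}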
m30999| Thu Jun 14 01:29:05 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:58997 #4 (4 connections now open)
m30000| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:39108 #14 (14 connections now open)
m30001| Thu Jun 14 01:29:05 [conn4] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:05 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:29:05 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' acquired, ts : 4fd976a15b3204a1e3977209
m30001| Thu Jun 14 01:29:05 [conn4] splitChunk accepted at version 1|0||4fd976a08c7a5fd108c1eeb1
m30001| Thu Jun 14 01:29:05 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:05-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651745402), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30001| Thu Jun 14 01:29:05 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' unlocked.
{ "ok" : 1 }
m30001| Thu Jun 14 01:29:05 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651745:1408930224 (sleeping for 30000ms)
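Each splitChunk and moveChunk briefly takes the collection's distributed lock on the config server (the "created new distributed lock", "acquired" and "unlocked" lines), and every participating process starts a LockPinger thread that refreshes its ping every 30000ms. The lock state can be inspected from any mongos; a sketch, assuming the config handle from the first sketch and assuming the lock documents are keyed by the lock names quoted in the log ("foo.bar", "balancer"):

// Current distributed-lock documents plus the per-process ping documents
// that the LockPinger threads maintain on the config server.
config.locks.find({ _id: { $in: ["foo.bar", "balancer"] } }).forEach(printjson);
config.lockpings.find().forEach(printjson);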
m30999| Thu Jun 14 01:29:05 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|2||4fd976a08c7a5fd108c1eeb1 based on: 1|0||4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:05 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:29:05 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:29:05 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:05 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:29:05 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' acquired, ts : 4fd976a15b3204a1e397720a
m30001| Thu Jun 14 01:29:05 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:05-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651745407), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:29:05 [conn4] moveChunk request accepted at version 1|2||4fd976a08c7a5fd108c1eeb1
m30001| Thu Jun 14 01:29:05 [conn4] moveChunk number of documents: 0
m30001| Thu Jun 14 01:29:05 [initandlisten] connection accepted from 127.0.0.1:58999 #5 (5 connections now open)
m30000| Thu Jun 14 01:29:05 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:29:06 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.968 secs
m30000| Thu Jun 14 01:29:06 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.937 secs
m30000| Thu Jun 14 01:29:06 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:29:06 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:29:06 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.286 secs
m30000| Thu Jun 14 01:29:06 [migrateThread] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:29:06 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:06 [migrateThread] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:29:06 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 0.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:29:06 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30000| Thu Jun 14 01:29:07 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.647 secs
m30001| Thu Jun 14 01:29:07 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:29:07 [conn4] moveChunk setting version to: 2|0||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 0.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:29:07 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:07-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651747418), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 5: 1245, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 764 } }
m30000| Thu Jun 14 01:29:07 [initandlisten] connection accepted from 127.0.0.1:39110 #15 (15 connections now open)
{ "millis" : 2019, "ok" : 1 }
m30001| Thu Jun 14 01:29:07 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:29:07 [conn4] moveChunk updating self version to: 2|1||4fd976a08c7a5fd108c1eeb1 through { _id: MinKey } -> { _id: 0.0 } for collection 'foo.bar'
m30001| Thu Jun 14 01:29:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:07-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651747423), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:29:07 [conn4] doing delete inline
m30001| Thu Jun 14 01:29:07 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:29:07 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' unlocked.
m30001| Thu Jun 14 01:29:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:07-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651747423), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:29:07 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:85 w:49 reslen:37 2018ms
m30999| Thu Jun 14 01:29:07 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 2|1||4fd976a08c7a5fd108c1eeb1 based on: 1|2||4fd976a08c7a5fd108c1eeb1
{ "ok" : 1 }
m30999| Thu Jun 14 01:29:07 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
m30999| Thu Jun 14 01:29:07 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 2|3||4fd976a08c7a5fd108c1eeb1 based on: 2|1||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:07 [initandlisten] connection accepted from 127.0.0.1:39111 #16 (16 connections now open)
m30000| Thu Jun 14 01:29:07 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:29:07 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:29:07 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651747:872239924' acquired, ts : 4fd976a3ed7d681b17aa21ae
m30000| Thu Jun 14 01:29:07 [conn5] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:29:07 [conn5] splitChunk accepted at version 2|0||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:07 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:07-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:39076", time: new Date(1339651747430), what: "split", ns: "foo.bar", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30000| Thu Jun 14 01:29:07 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651747:872239924' unlocked.
m30000| Thu Jun 14 01:29:07 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339651747:872239924 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:07 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 1.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:29:07 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 2|3||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:29:07 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:29:07 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:29:07 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651747:872239924' acquired, ts : 4fd976a3ed7d681b17aa21af
m30000| Thu Jun 14 01:29:07 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:07-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:39076", time: new Date(1339651747434), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:29:07 [conn5] moveChunk request accepted at version 2|3||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:07 [conn5] moveChunk number of documents: 0
m30001| Thu Jun 14 01:29:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:29:08 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:29:08 [conn5] moveChunk setting version to: 3|0||4fd976a08c7a5fd108c1eeb1
m30001| Thu Jun 14 01:29:08 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:29:08 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:08-4", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651748442), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1006 } }
m30000| Thu Jun 14 01:29:08 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:29:08 [conn5] moveChunk updating self version to: 3|1||4fd976a08c7a5fd108c1eeb1 through { _id: 0.0 } -> { _id: 1.0 } for collection 'foo.bar'
m30000| Thu Jun 14 01:29:08 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:08-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:39076", time: new Date(1339651748447), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:29:08 [conn5] doing delete inline
m30000| Thu Jun 14 01:29:08 [conn5] moveChunk deleted: 0
m30000| Thu Jun 14 01:29:08 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651747:872239924' unlocked.
m30000| Thu Jun 14 01:29:08 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:08-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:39076", time: new Date(1339651748447), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 0 } }
m30000| Thu Jun 14 01:29:08 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:1422 w:986 reslen:37 1014ms
m30999| Thu Jun 14 01:29:08 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 6 version: 3|1||4fd976a08c7a5fd108c1eeb1 based on: 2|3||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1016, "ok" : 1 }
m30999| Thu Jun 14 01:29:08 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:29:08 [conn4] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2.0 } ], shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:08 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:29:08 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' acquired, ts : 4fd976a45b3204a1e397720b
m30001| Thu Jun 14 01:29:08 [conn4] splitChunk accepted at version 3|0||4fd976a08c7a5fd108c1eeb1
m30001| Thu Jun 14 01:29:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:08-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651748452), what: "split", ns: "foo.bar", details: { before: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30001| Thu Jun 14 01:29:08 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' unlocked.
m30999| Thu Jun 14 01:29:08 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 7 version: 3|3||4fd976a08c7a5fd108c1eeb1 based on: 3|1||4fd976a08c7a5fd108c1eeb1
{ "ok" : 1 }
m30999| Thu Jun 14 01:29:08 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 2.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:29:08 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { _id: 2.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:29:08 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 2.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_2.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:08 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:29:08 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' acquired, ts : 4fd976a45b3204a1e397720c
m30001| Thu Jun 14 01:29:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:08-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651748456), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:29:08 [conn4] moveChunk request accepted at version 3|3||4fd976a08c7a5fd108c1eeb1
m30001| Thu Jun 14 01:29:08 [conn4] moveChunk number of documents: 0
m30002| Thu Jun 14 01:29:08 [initandlisten] connection accepted from 127.0.0.1:46631 #4 (4 connections now open)
m30001| Thu Jun 14 01:29:08 [initandlisten] connection accepted from 127.0.0.1:59003 #6 (6 connections now open)
m30002| Thu Jun 14 01:29:08 [FileAllocator] allocating new datafile /data/db/test2/foo.ns, filling with zeroes...
m30002| Thu Jun 14 01:29:08 [FileAllocator] creating directory /data/db/test2/_tmp
m30002| Thu Jun 14 01:29:08 [FileAllocator] done allocating datafile /data/db/test2/foo.ns, size: 16MB, took 0.229 secs
m30002| Thu Jun 14 01:29:08 [FileAllocator] allocating new datafile /data/db/test2/foo.0, filling with zeroes...
m30002| Thu Jun 14 01:29:08 [FileAllocator] done allocating datafile /data/db/test2/foo.0, size: 16MB, took 0.289 secs
m30002| Thu Jun 14 01:29:08 [migrateThread] build index foo.bar { _id: 1 }
m30002| Thu Jun 14 01:29:08 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:29:08 [migrateThread] info: creating collection foo.bar on add index
m30002| Thu Jun 14 01:29:08 [FileAllocator] allocating new datafile /data/db/test2/foo.1, filling with zeroes...
m30002| Thu Jun 14 01:29:08 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 2.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:29:09 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 2.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:29:09 [conn4] moveChunk setting version to: 4|0||4fd976a08c7a5fd108c1eeb1
m30002| Thu Jun 14 01:29:09 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 2.0 } -> { _id: MaxKey }
m30002| Thu Jun 14 01:29:09 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:09-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651749462), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: MaxKey }, step1 of 5: 533, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 470 } }
m30000| Thu Jun 14 01:29:09 [initandlisten] connection accepted from 127.0.0.1:39114 #17 (17 connections now open)
m30001| Thu Jun 14 01:29:09 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30001", min: { _id: 2.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:29:09 [conn4] moveChunk updating self version to: 4|1||4fd976a08c7a5fd108c1eeb1 through { _id: MinKey } -> { _id: 0.0 } for collection 'foo.bar'
m30001| Thu Jun 14 01:29:09 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:09-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651749467), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:29:09 [conn4] doing delete inline
m30001| Thu Jun 14 01:29:09 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:29:09 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' unlocked.
m30001| Thu Jun 14 01:29:09 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:09-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651749467), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:29:09 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 2.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_2.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:139 w:90 reslen:37 1012ms
m30999| Thu Jun 14 01:29:09 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 8 version: 4|1||4fd976a08c7a5fd108c1eeb1 based on: 3|3||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1013, "ok" : 1 }
m30999| Thu Jun 14 01:29:09 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0002:localhost:30002 lastmod: 4|0||000000000000000000000000 min: { _id: 2.0 } max: { _id: MaxKey }
m30002| Thu Jun 14 01:29:09 [initandlisten] connection accepted from 127.0.0.1:46634 #5 (5 connections now open)
m30000| Thu Jun 14 01:29:09 [initandlisten] connection accepted from 127.0.0.1:39116 #18 (18 connections now open)
m30002| Thu Jun 14 01:29:09 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: 3.0 } ], shardId: "foo.bar-_id_2.0", configdb: "localhost:30000" }
m30002| Thu Jun 14 01:29:09 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:29:09 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30002:1339651749:891230109' acquired, ts : 4fd976a5074758c3ce75c4db
m30002| Thu Jun 14 01:29:09 [conn5] no current chunk manager found for this shard, will initialize
m30002| Thu Jun 14 01:29:09 [conn5] splitChunk accepted at version 4|0||4fd976a08c7a5fd108c1eeb1
m30002| Thu Jun 14 01:29:09 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:09-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46634", time: new Date(1339651749474), what: "split", ns: "foo.bar", details: { before: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30002| Thu Jun 14 01:29:09 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30002:1339651749:891230109' unlocked.
m30002| Thu Jun 14 01:29:09 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30002:1339651749:891230109 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:09 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 9 version: 4|3||4fd976a08c7a5fd108c1eeb1 based on: 4|1||4fd976a08c7a5fd108c1eeb1
{ "ok" : 1 }
m30999| Thu Jun 14 01:29:09 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 3.0 }, to: "shard0003" }
m30999| Thu Jun 14 01:29:09 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0002:localhost:30002 lastmod: 4|3||000000000000000000000000 min: { _id: 3.0 } max: { _id: MaxKey }) shard0002:localhost:30002 -> shard0003:localhost:30003
m30002| Thu Jun 14 01:29:09 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30002", to: "localhost:30003", fromShard: "shard0002", toShard: "shard0003", min: { _id: 3.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_3.0", configdb: "localhost:30000" }
m30002| Thu Jun 14 01:29:09 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:29:09 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30002:1339651749:891230109' acquired, ts : 4fd976a5074758c3ce75c4dc
m30002| Thu Jun 14 01:29:09 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:09-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46634", time: new Date(1339651749479), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0002", to: "shard0003" } }
m30002| Thu Jun 14 01:29:09 [conn5] moveChunk request accepted at version 4|3||4fd976a08c7a5fd108c1eeb1
m30002| Thu Jun 14 01:29:09 [conn5] moveChunk number of documents: 0
m30003| Thu Jun 14 01:29:09 [initandlisten] connection accepted from 127.0.0.1:57682 #4 (4 connections now open)
m30002| Thu Jun 14 01:29:09 [initandlisten] connection accepted from 127.0.0.1:46637 #6 (6 connections now open)
m30003| Thu Jun 14 01:29:09 [FileAllocator] allocating new datafile /data/db/test3/foo.ns, filling with zeroes...
m30003| Thu Jun 14 01:29:09 [FileAllocator] creating directory /data/db/test3/_tmp
m30002| Thu Jun 14 01:29:09 [FileAllocator] done allocating datafile /data/db/test2/foo.1, size: 32MB, took 0.601 secs
m30003| Thu Jun 14 01:29:09 [FileAllocator] done allocating datafile /data/db/test3/foo.ns, size: 16MB, took 0.293 secs
m30003| Thu Jun 14 01:29:09 [FileAllocator] allocating new datafile /data/db/test3/foo.0, filling with zeroes...
m30003| Thu Jun 14 01:29:10 [FileAllocator] done allocating datafile /data/db/test3/foo.0, size: 16MB, took 0.319 secs
m30003| Thu Jun 14 01:29:10 [FileAllocator] allocating new datafile /data/db/test3/foo.1, filling with zeroes...
m30003| Thu Jun 14 01:29:10 [migrateThread] build index foo.bar { _id: 1 }
m30003| Thu Jun 14 01:29:10 [migrateThread] build index done. scanned 0 total records. 0 secs
m30003| Thu Jun 14 01:29:10 [migrateThread] info: creating collection foo.bar on add index
m30003| Thu Jun 14 01:29:10 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 3.0 } -> { _id: MaxKey }
m30002| Thu Jun 14 01:29:10 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30002", min: { _id: 3.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Thu Jun 14 01:29:10 [conn5] moveChunk setting version to: 5|0||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:10 [initandlisten] connection accepted from 127.0.0.1:39119 #19 (19 connections now open)
m30003| Thu Jun 14 01:29:10 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 3.0 } -> { _id: MaxKey }
m30003| Thu Jun 14 01:29:10 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:10-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651750494), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: MaxKey }, step1 of 5: 737, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 276 } }
m30002| Thu Jun 14 01:29:10 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30002", min: { _id: 3.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30002| Thu Jun 14 01:29:10 [conn5] moveChunk updating self version to: 5|1||4fd976a08c7a5fd108c1eeb1 through { _id: 2.0 } -> { _id: 3.0 } for collection 'foo.bar'
m30002| Thu Jun 14 01:29:10 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:10-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46634", time: new Date(1339651750499), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0002", to: "shard0003" } }
m30002| Thu Jun 14 01:29:10 [conn5] doing delete inline
m30002| Thu Jun 14 01:29:10 [conn5] moveChunk deleted: 0
m30002| Thu Jun 14 01:29:10 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30002:1339651749:891230109' unlocked.
m30002| Thu Jun 14 01:29:10 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:10-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46634", time: new Date(1339651750500), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 16, step6 of 6: 0 } }
m30002| Thu Jun 14 01:29:10 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30002", to: "localhost:30003", fromShard: "shard0002", toShard: "shard0003", min: { _id: 3.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_3.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:61 w:37 reslen:37 1021ms
m30999| Thu Jun 14 01:29:10 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 10 version: 5|1||4fd976a08c7a5fd108c1eeb1 based on: 4|3||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:29:10 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0003:localhost:30003 lastmod: 5|0||000000000000000000000000 min: { _id: 3.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:29:10 [initandlisten] connection accepted from 127.0.0.1:39121 #20 (20 connections now open)
m30999| Thu Jun 14 01:29:10 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 11 version: 5|3||4fd976a08c7a5fd108c1eeb1 based on: 5|1||4fd976a08c7a5fd108c1eeb1
{ "ok" : 1 }
m30003| Thu Jun 14 01:29:10 [initandlisten] connection accepted from 127.0.0.1:57685 #5 (5 connections now open)
m30003| Thu Jun 14 01:29:10 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: 4.0 } ], shardId: "foo.bar-_id_3.0", configdb: "localhost:30000" }
m30003| Thu Jun 14 01:29:10 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30003| Thu Jun 14 01:29:10 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30003:1339651750:143292051' acquired, ts : 4fd976a6ac11d87ec8873d5b
m30003| Thu Jun 14 01:29:10 [conn5] no current chunk manager found for this shard, will initialize
m30003| Thu Jun 14 01:29:10 [conn5] splitChunk accepted at version 5|0||4fd976a08c7a5fd108c1eeb1
m30003| Thu Jun 14 01:29:10 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:10-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57685", time: new Date(1339651750507), what: "split", ns: "foo.bar", details: { before: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30003| Thu Jun 14 01:29:10 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30003:1339651750:143292051' unlocked.
m30003| Thu Jun 14 01:29:10 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30003:1339651750:143292051 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:10 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 4.0 }, to: "shard0004" }
m30999| Thu Jun 14 01:29:10 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0003:localhost:30003 lastmod: 5|3||000000000000000000000000 min: { _id: 4.0 } max: { _id: MaxKey }) shard0003:localhost:30003 -> shard0004:localhost:30004
m30004| Thu Jun 14 01:29:10 [initandlisten] connection accepted from 127.0.0.1:52319 #4 (4 connections now open)
m30003| Thu Jun 14 01:29:10 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30003", to: "localhost:30004", fromShard: "shard0003", toShard: "shard0004", min: { _id: 4.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_4.0", configdb: "localhost:30000" }
m30003| Thu Jun 14 01:29:10 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30003| Thu Jun 14 01:29:10 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30003:1339651750:143292051' acquired, ts : 4fd976a6ac11d87ec8873d5c
m30003| Thu Jun 14 01:29:10 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:10-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57685", time: new Date(1339651750511), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0003", to: "shard0004" } }
m30003| Thu Jun 14 01:29:10 [conn5] moveChunk request accepted at version 5|3||4fd976a08c7a5fd108c1eeb1
m30003| Thu Jun 14 01:29:10 [conn5] moveChunk number of documents: 0
m30003| Thu Jun 14 01:29:10 [initandlisten] connection accepted from 127.0.0.1:57688 #6 (6 connections now open)
m30004| Thu Jun 14 01:29:10 [FileAllocator] allocating new datafile /data/db/test4/foo.ns, filling with zeroes...
m30004| Thu Jun 14 01:29:10 [FileAllocator] creating directory /data/db/test4/_tmp
m30003| Thu Jun 14 01:29:10 [FileAllocator] done allocating datafile /data/db/test3/foo.1, size: 32MB, took 0.693 secs
m30004| Thu Jun 14 01:29:11 [FileAllocator] done allocating datafile /data/db/test4/foo.ns, size: 16MB, took 0.296 secs
m30004| Thu Jun 14 01:29:11 [FileAllocator] allocating new datafile /data/db/test4/foo.0, filling with zeroes...
m30004| Thu Jun 14 01:29:11 [FileAllocator] done allocating datafile /data/db/test4/foo.0, size: 16MB, took 0.276 secs
m30004| Thu Jun 14 01:29:11 [migrateThread] build index foo.bar { _id: 1 }
m30004| Thu Jun 14 01:29:11 [migrateThread] build index done. scanned 0 total records. 0 secs
m30004| Thu Jun 14 01:29:11 [migrateThread] info: creating collection foo.bar on add index
m30004| Thu Jun 14 01:29:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 4.0 } -> { _id: MaxKey }
m30004| Thu Jun 14 01:29:11 [FileAllocator] allocating new datafile /data/db/test4/foo.1, filling with zeroes...
m30004| Thu Jun 14 01:29:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 4.0 } -> { _id: MaxKey }
m30004| Thu Jun 14 01:29:11 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:11-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651751528), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: MaxKey }, step1 of 5: 979, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 35 } }
m30000| Thu Jun 14 01:29:11 [initandlisten] connection accepted from 127.0.0.1:39124 #21 (21 connections now open)
{ "millis" : 1030, "ok" : 1 }
m30999| Thu Jun 14 01:29:11 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 12 version: 6|1||4fd976a08c7a5fd108c1eeb1 based on: 5|3||4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:11 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0004:localhost:30004 lastmod: 6|0||000000000000000000000000 min: { _id: 4.0 } max: { _id: MaxKey }
m30004| Thu Jun 14 01:29:11 [initandlisten] connection accepted from 127.0.0.1:52322 #5 (5 connections now open)
m30003| Thu Jun 14 01:29:11 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30003", min: { _id: 4.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Thu Jun 14 01:29:11 [conn5] moveChunk setting version to: 6|0||4fd976a08c7a5fd108c1eeb1
m30003| Thu Jun 14 01:29:11 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30003", min: { _id: 4.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30003| Thu Jun 14 01:29:11 [conn5] moveChunk updating self version to: 6|1||4fd976a08c7a5fd108c1eeb1 through { _id: 3.0 } -> { _id: 4.0 } for collection 'foo.bar'
m30003| Thu Jun 14 01:29:11 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:11-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57685", time: new Date(1339651751538), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0003", to: "shard0004" } }
m30003| Thu Jun 14 01:29:11 [conn5] doing delete inline
m30003| Thu Jun 14 01:29:11 [conn5] moveChunk deleted: 0
m30003| Thu Jun 14 01:29:11 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30003:1339651750:143292051' unlocked.
m30003| Thu Jun 14 01:29:11 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:11-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57685", time: new Date(1339651751538), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1005, step5 of 6: 20, step6 of 6: 0 } }
m30003| Thu Jun 14 01:29:11 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30003", to: "localhost:30004", fromShard: "shard0003", toShard: "shard0004", min: { _id: 4.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_4.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:61 w:40 reslen:37 1028ms
m30000| Thu Jun 14 01:29:11 [initandlisten] connection accepted from 127.0.0.1:39126 #22 (22 connections now open)
m30004| Thu Jun 14 01:29:11 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0004", splitKeys: [ { _id: 5.0 } ], shardId: "foo.bar-_id_4.0", configdb: "localhost:30000" }
m30004| Thu Jun 14 01:29:11 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30004| Thu Jun 14 01:29:11 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30004:1339651751:1901482191' acquired, ts : 4fd976a75ad69a0083a1699b
m30004| Thu Jun 14 01:29:11 [conn5] no current chunk manager found for this shard, will initialize
m30004| Thu Jun 14 01:29:11 [conn5] splitChunk accepted at version 6|0||4fd976a08c7a5fd108c1eeb1
m30004| Thu Jun 14 01:29:11 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:11-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52322", time: new Date(1339651751546), what: "split", ns: "foo.bar", details: { before: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 6000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30004| Thu Jun 14 01:29:11 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30004:1339651751:1901482191' unlocked.
{ "ok" : 1 }
m30004| Thu Jun 14 01:29:11 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30004:1339651751:1901482191 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:11 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 13 version: 6|3||4fd976a08c7a5fd108c1eeb1 based on: 6|1||4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:11 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 5.0 }, to: "shard0005" }
m30999| Thu Jun 14 01:29:11 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0004:localhost:30004 lastmod: 6|3||000000000000000000000000 min: { _id: 5.0 } max: { _id: MaxKey }) shard0004:localhost:30004 -> shard0005:localhost:30005
m30004| Thu Jun 14 01:29:11 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30004", to: "localhost:30005", fromShard: "shard0004", toShard: "shard0005", min: { _id: 5.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_5.0", configdb: "localhost:30000" }
m30004| Thu Jun 14 01:29:11 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30004| Thu Jun 14 01:29:11 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30004:1339651751:1901482191' acquired, ts : 4fd976a75ad69a0083a1699c
m30004| Thu Jun 14 01:29:11 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:11-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52322", time: new Date(1339651751550), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0004", to: "shard0005" } }
m30004| Thu Jun 14 01:29:11 [conn5] moveChunk request accepted at version 6|3||4fd976a08c7a5fd108c1eeb1
m30004| Thu Jun 14 01:29:11 [conn5] moveChunk number of documents: 0
m30005| Thu Jun 14 01:29:11 [initandlisten] connection accepted from 127.0.0.1:36114 #4 (4 connections now open)
m30004| Thu Jun 14 01:29:11 [initandlisten] connection accepted from 127.0.0.1:52325 #6 (6 connections now open)
m30005| Thu Jun 14 01:29:11 [FileAllocator] allocating new datafile /data/db/test5/foo.ns, filling with zeroes...
m30005| Thu Jun 14 01:29:11 [FileAllocator] creating directory /data/db/test5/_tmp
m30004| Thu Jun 14 01:29:12 [FileAllocator] done allocating datafile /data/db/test4/foo.1, size: 32MB, took 0.69 secs
m30005| Thu Jun 14 01:29:12 [FileAllocator] done allocating datafile /data/db/test5/foo.ns, size: 16MB, took 0.328 secs
m30005| Thu Jun 14 01:29:12 [FileAllocator] allocating new datafile /data/db/test5/foo.0, filling with zeroes...
m30004| Thu Jun 14 01:29:12 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30004", min: { _id: 5.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30005| Thu Jun 14 01:29:12 [FileAllocator] done allocating datafile /data/db/test5/foo.0, size: 16MB, took 0.343 secs
m30005| Thu Jun 14 01:29:12 [migrateThread] build index foo.bar { _id: 1 }
m30005| Thu Jun 14 01:29:12 [FileAllocator] allocating new datafile /data/db/test5/foo.1, filling with zeroes...
m30005| Thu Jun 14 01:29:12 [migrateThread] build index done. scanned 0 total records. 0.019 secs
m30005| Thu Jun 14 01:29:12 [migrateThread] info: creating collection foo.bar on add index
m30005| Thu Jun 14 01:29:12 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 5.0 } -> { _id: MaxKey }
m30005| Thu Jun 14 01:29:13 [FileAllocator] done allocating datafile /data/db/test5/foo.1, size: 32MB, took 0.627 secs
m30004| Thu Jun 14 01:29:13 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30004", min: { _id: 5.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30004| Thu Jun 14 01:29:13 [conn5] moveChunk setting version to: 7|0||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:13 [initandlisten] connection accepted from 127.0.0.1:39129 #23 (23 connections now open)
m30005| Thu Jun 14 01:29:13 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 5.0 } -> { _id: MaxKey }
m30005| Thu Jun 14 01:29:13 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:13-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651753607), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, step1 of 5: 1302, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 722 } }
m30004| Thu Jun 14 01:29:13 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30004", min: { _id: 5.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30004| Thu Jun 14 01:29:13 [conn5] moveChunk updating self version to: 7|1||4fd976a08c7a5fd108c1eeb1 through { _id: 4.0 } -> { _id: 5.0 } for collection 'foo.bar'
m30004| Thu Jun 14 01:29:13 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:13-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52322", time: new Date(1339651753611), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0004", to: "shard0005" } }
m30004| Thu Jun 14 01:29:13 [conn5] doing delete inline
m30004| Thu Jun 14 01:29:13 [conn5] moveChunk deleted: 0
m30004| Thu Jun 14 01:29:13 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30004:1339651751:1901482191' unlocked.
m30004| Thu Jun 14 01:29:13 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:13-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52322", time: new Date(1339651753612), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 30, step4 of 6: 2021, step5 of 6: 8, step6 of 6: 0 } }
m30004| Thu Jun 14 01:29:13 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30004", to: "localhost:30005", fromShard: "shard0004", toShard: "shard0005", min: { _id: 5.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_5.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:95 w:48 reslen:37 2062ms
{ "millis" : 2064, "ok" : 1 }
m30999| Thu Jun 14 01:29:13 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 14 version: 7|1||4fd976a08c7a5fd108c1eeb1 based on: 6|3||4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:13 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0005:localhost:30005 lastmod: 7|0||000000000000000000000000 min: { _id: 5.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:29:13 [initandlisten] connection accepted from 127.0.0.1:39131 #24 (24 connections now open)
m30005| Thu Jun 14 01:29:13 [initandlisten] connection accepted from 127.0.0.1:36117 #5 (5 connections now open)
m30005| Thu Jun 14 01:29:13 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0005", splitKeys: [ { _id: 6.0 } ], shardId: "foo.bar-_id_5.0", configdb: "localhost:30000" }
m30005| Thu Jun 14 01:29:13 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30005| Thu Jun 14 01:29:13 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30005:1339651753:29859155' acquired, ts : 4fd976a9e8f51e72833aae1c
m30005| Thu Jun 14 01:29:13 [conn5] no current chunk manager found for this shard, will initialize
m30005| Thu Jun 14 01:29:13 [conn5] splitChunk accepted at version 7|0||4fd976a08c7a5fd108c1eeb1
m30005| Thu Jun 14 01:29:13 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:13-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:36117", time: new Date(1339651753619), what: "split", ns: "foo.bar", details: { before: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30005| Thu Jun 14 01:29:13 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30005:1339651753:29859155' unlocked.
m30005| Thu Jun 14 01:29:13 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30005:1339651753:29859155 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:13 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 15 version: 7|3||4fd976a08c7a5fd108c1eeb1 based on: 7|1||4fd976a08c7a5fd108c1eeb1
{ "ok" : 1 }
m30999| Thu Jun 14 01:29:13 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 6.0 }, to: "shard0006" }
m30999| Thu Jun 14 01:29:13 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0005:localhost:30005 lastmod: 7|3||000000000000000000000000 min: { _id: 6.0 } max: { _id: MaxKey }) shard0005:localhost:30005 -> shard0006:localhost:30006
m30005| Thu Jun 14 01:29:13 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30005", to: "localhost:30006", fromShard: "shard0005", toShard: "shard0006", min: { _id: 6.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_6.0", configdb: "localhost:30000" }
m30005| Thu Jun 14 01:29:13 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30005| Thu Jun 14 01:29:13 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30005:1339651753:29859155' acquired, ts : 4fd976a9e8f51e72833aae1d
m30005| Thu Jun 14 01:29:13 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:13-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:36117", time: new Date(1339651753624), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0005", to: "shard0006" } }
m30005| Thu Jun 14 01:29:13 [conn5] moveChunk request accepted at version 7|3||4fd976a08c7a5fd108c1eeb1
m30005| Thu Jun 14 01:29:13 [conn5] moveChunk number of documents: 0
m30006| Thu Jun 14 01:29:13 [initandlisten] connection accepted from 127.0.0.1:57822 #4 (4 connections now open)
m30005| Thu Jun 14 01:29:13 [initandlisten] connection accepted from 127.0.0.1:36120 #6 (6 connections now open)
m30006| Thu Jun 14 01:29:13 [FileAllocator] allocating new datafile /data/db/test6/foo.ns, filling with zeroes...
m30006| Thu Jun 14 01:29:13 [FileAllocator] creating directory /data/db/test6/_tmp
m30006| Thu Jun 14 01:29:13 [FileAllocator] done allocating datafile /data/db/test6/foo.ns, size: 16MB, took 0.326 secs
m30006| Thu Jun 14 01:29:13 [FileAllocator] allocating new datafile /data/db/test6/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:39134 #25 (25 connections now open)
m30005| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:36122 #7 (7 connections now open)
m30006| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:57826 #5 (5 connections now open)
m30007| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:56513 #4 (4 connections now open)
m30008| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:51954 #4 (4 connections now open)
m30009| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:46337 #4 (4 connections now open)
m30999| Thu Jun 14 01:29:14 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651744:1804289383' acquired, ts : 4fd976aa8c7a5fd108c1eeb2
m30001| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:59030 #7 (7 connections now open)
m30002| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:46660 #7 (7 connections now open)
m30003| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:57707 #7 (7 connections now open)
m30004| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:52340 #7 (7 connections now open)
m30005| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:36131 #8 (8 connections now open)
m30006| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:57835 #6 (6 connections now open)
m30007| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:56522 #5 (5 connections now open)
m30008| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:51963 #5 (5 connections now open)
m30009| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:46346 #5 (5 connections now open)
m30000| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:39149 #26 (26 connections now open)
m30001| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:59040 #8 (8 connections now open)
m30002| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:46670 #8 (8 connections now open)
m30003| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:57717 #8 (8 connections now open)
m30004| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:52350 #8 (8 connections now open)
m30005| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:36141 #9 (9 connections now open)
m30006| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:57845 #7 (7 connections now open)
m30007| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:56532 #6 (6 connections now open)
m30008| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:51973 #6 (6 connections now open)
m30009| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:46356 #6 (6 connections now open)
m30000| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:39159 #27 (27 connections now open)
m30006| Thu Jun 14 01:29:14 [FileAllocator] done allocating datafile /data/db/test6/foo.0, size: 16MB, took 0.318 secs
m30006| Thu Jun 14 01:29:14 [migrateThread] build index foo.bar { _id: 1 }
m30006| Thu Jun 14 01:29:14 [migrateThread] build index done. scanned 0 total records. 0 secs
m30006| Thu Jun 14 01:29:14 [migrateThread] info: creating collection foo.bar on add index
m30006| Thu Jun 14 01:29:14 [conn5] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:28 reslen:1753 148ms
m30006| Thu Jun 14 01:29:14 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 6.0 } -> { _id: MaxKey }
m30006| Thu Jun 14 01:29:14 [FileAllocator] allocating new datafile /data/db/test6/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:29:14 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30006", fromShard: "shard0001", toShard: "shard0006", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:14 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:39160 #28 (28 connections now open)
m30001| Thu Jun 14 01:29:14 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:14-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651754285), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, note: "aborted" } }
m30999| Thu Jun 14 01:29:14 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:29:14 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0002 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0003 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0004 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0005 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0006 maxSize: 0 currSize: 16 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0007 maxSize: 0 currSize: 0 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0008 maxSize: 0 currSize: 0 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] shard0009 maxSize: 0 currSize: 0 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:29:14 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:29:14 [Balancer] shard0000
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:29:14 [Balancer] shard0001
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:29:14 [Balancer] shard0002
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0002" }
m30999| Thu Jun 14 01:29:14 [Balancer] shard0003
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: 4.0 }, shard: "shard0003" }
m30999| Thu Jun 14 01:29:14 [Balancer] shard0004
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_4.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: 4.0 }, max: { _id: 5.0 }, shard: "shard0004" }
m30999| Thu Jun 14 01:29:14 [Balancer] shard0005
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_5.0", lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: 5.0 }, max: { _id: 6.0 }, shard: "shard0005" }
m30999| Thu Jun 14 01:29:14 [Balancer] { _id: "foo.bar-_id_6.0", lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: 6.0 }, max: { _id: MaxKey }, shard: "shard0005" }
m30999| Thu Jun 14 01:29:14 [Balancer] shard0006
m30999| Thu Jun 14 01:29:14 [Balancer] shard0007
m30999| Thu Jun 14 01:29:14 [Balancer] shard0008
m30999| Thu Jun 14 01:29:14 [Balancer] shard0009
m30999| Thu Jun 14 01:29:14 [Balancer] ----
m30999| Thu Jun 14 01:29:14 [Balancer] chose [shard0001] to [shard0006] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:29:14 [Balancer] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 4|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 }) shard0001:localhost:30001 -> shard0006:localhost:30006
m30999| Thu Jun 14 01:29:14 [Balancer] moveChunk result: { who: { _id: "foo.bar", process: "domU-12-31-39-01-70-B4:30005:1339651753:29859155", state: 2, ts: ObjectId('4fd976a9e8f51e72833aae1d'), when: new Date(1339651753623), who: "domU-12-31-39-01-70-B4:30005:1339651753:29859155:conn5:1943106549", why: "migrate-{ _id: 6.0 }" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }", ok: 0.0 }
m30999| Thu Jun 14 01:29:14 [Balancer] balancer move failed: { who: { _id: "foo.bar", process: "domU-12-31-39-01-70-B4:30005:1339651753:29859155", state: 2, ts: ObjectId('4fd976a9e8f51e72833aae1d'), when: new Date(1339651753623), who: "domU-12-31-39-01-70-B4:30005:1339651753:29859155:conn5:1943106549", why: "migrate-{ _id: 6.0 }" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }", ok: 0.0 } from: shard0001 to: shard0006 chunk: Assertion: 10331:EOO Before end of object
m30999| 0x84f514a 0x8126495 0x83f3537 0x8121f36 0x8488fac 0x82c589c 0x8128991 0x82c32b3 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0x9d2542 0x2ceb6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x8121f36]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo14BalancerPolicy9ChunkInfo8toStringEv+0x7c) [0x8488fac]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo14LazyStringImplINS_14BalancerPolicy9ChunkInfoEE3valEv+0x2c) [0x82c589c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo9LogstreamlsERKNS_10LazyStringE+0x31) [0x8128991]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x853) [0x82c32b3]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c) [0x82c4b6c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0x9d2542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x2ceb6e]
m30999| Thu Jun 14 01:29:14 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651744:1804289383' unlocked.
m30999| Thu Jun 14 01:29:14 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Thu Jun 14 01:29:14 [Balancer] caught exception while doing balance: EOO Before end of object
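This balancer round fails twice over. The migration it chose (the { _id: MinKey } chunk from shard0001 to shard0006) is refused because the collection's distributed lock is still held by the in-flight migrate-{ _id: 6.0 } from shard0005, hence the "could not be locked" errmsg and the "note: aborted" moveChunk.from entry logged on m30001. While formatting that failure for the log, mongos then trips assertion 10331 ("EOO Before end of object") inside BalancerPolicy::ChunkInfo::toString (see the backtrace), so the round is abandoned with "caught exception while doing balance" and the balancer lock is released. The lock contention itself can be inspected from a shell connected to the mongos, e.g.:

// look at the distributed locks involved in the refused migration
var config = db.getSiblingDB("config");
printjson(config.locks.findOne({ _id: "foo.bar" }));   // state: 2 (locked); 'why' should show migrate-{ _id: 6.0 }
printjson(config.locks.findOne({ _id: "balancer" }));  // the balancer's own lock, unlocked again after the failed round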
m30000| Thu Jun 14 01:29:14 [conn5] end connection 127.0.0.1:39076 (27 connections now open)
m30005| Thu Jun 14 01:29:14 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30005", min: { _id: 6.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30005| Thu Jun 14 01:29:14 [conn5] moveChunk setting version to: 8|0||4fd976a08c7a5fd108c1eeb1
m30006| Thu Jun 14 01:29:14 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 6.0 } -> { _id: MaxKey }
m30006| Thu Jun 14 01:29:14 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:14-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651754635), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: MaxKey }, step1 of 5: 656, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 353 } }
m30000| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:39161 #29 (28 connections now open)
m30005| Thu Jun 14 01:29:14 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30005", min: { _id: 6.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30005| Thu Jun 14 01:29:14 [conn5] moveChunk updating self version to: 8|1||4fd976a08c7a5fd108c1eeb1 through { _id: 5.0 } -> { _id: 6.0 } for collection 'foo.bar'
m30005| Thu Jun 14 01:29:14 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:14-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:36117", time: new Date(1339651754640), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0005", to: "shard0006" } }
m30005| Thu Jun 14 01:29:14 [conn5] doing delete inline
m30005| Thu Jun 14 01:29:14 [conn5] moveChunk deleted: 0
m30005| Thu Jun 14 01:29:14 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30005:1339651753:29859155' unlocked.
m30005| Thu Jun 14 01:29:14 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:14-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:36117", time: new Date(1339651754640), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30005| Thu Jun 14 01:29:14 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30005", to: "localhost:30006", fromShard: "shard0005", toShard: "shard0006", min: { _id: 6.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_6.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:100 w:48 reslen:37 1017ms
m30999| Thu Jun 14 01:29:14 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 16 version: 8|1||4fd976a08c7a5fd108c1eeb1 based on: 7|3||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1019, "ok" : 1 }
m30999| Thu Jun 14 01:29:14 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0006:localhost:30006 lastmod: 8|0||000000000000000000000000 min: { _id: 6.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:39162 #30 (29 connections now open)
m30006| Thu Jun 14 01:29:14 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0006", splitKeys: [ { _id: 7.0 } ], shardId: "foo.bar-_id_6.0", configdb: "localhost:30000" }
m30006| Thu Jun 14 01:29:14 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30006| Thu Jun 14 01:29:14 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30006:1339651754:1225926596' acquired, ts : 4fd976aa464a4077e5304bb6
m30006| Thu Jun 14 01:29:14 [conn5] no current chunk manager found for this shard, will initialize
m30006| Thu Jun 14 01:29:14 [conn5] splitChunk accepted at version 8|0||4fd976a08c7a5fd108c1eeb1
m30006| Thu Jun 14 01:29:14 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30006:1339651754:1225926596 (sleeping for 30000ms)
m30006| Thu Jun 14 01:29:14 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:14-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57826", time: new Date(1339651754647), what: "split", ns: "foo.bar", details: { before: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 8000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 8000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 8000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30000| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:39163 #31 (30 connections now open)
m30006| Thu Jun 14 01:29:14 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30006:1339651754:1225926596' unlocked.
{ "ok" : 1 }
m30999| Thu Jun 14 01:29:14 [conn] ChunkManager: time to load chunks for foo.bar: 3ms sequenceNumber: 17 version: 8|3||4fd976a08c7a5fd108c1eeb1 based on: 8|1||4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:14 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 7.0 }, to: "shard0007" }
m30999| Thu Jun 14 01:29:14 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0006:localhost:30006 lastmod: 8|3||000000000000000000000000 min: { _id: 7.0 } max: { _id: MaxKey }) shard0006:localhost:30006 -> shard0007:localhost:30007
m30006| Thu Jun 14 01:29:14 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30006", to: "localhost:30007", fromShard: "shard0006", toShard: "shard0007", min: { _id: 7.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_7.0", configdb: "localhost:30000" }
m30006| Thu Jun 14 01:29:14 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30006| Thu Jun 14 01:29:14 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30006:1339651754:1225926596' acquired, ts : 4fd976aa464a4077e5304bb7
m30006| Thu Jun 14 01:29:14 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:14-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57826", time: new Date(1339651754655), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0006", to: "shard0007" } }
m30006| Thu Jun 14 01:29:14 [conn5] moveChunk request accepted at version 8|3||4fd976a08c7a5fd108c1eeb1
m30006| Thu Jun 14 01:29:14 [conn5] moveChunk number of documents: 0
m30007| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:56540 #7 (7 connections now open)
m30006| Thu Jun 14 01:29:14 [initandlisten] connection accepted from 127.0.0.1:57855 #8 (8 connections now open)
m30007| Thu Jun 14 01:29:14 [FileAllocator] allocating new datafile /data/db/test7/foo.ns, filling with zeroes...
m30007| Thu Jun 14 01:29:14 [FileAllocator] creating directory /data/db/test7/_tmp
m30006| Thu Jun 14 01:29:14 [FileAllocator] done allocating datafile /data/db/test6/foo.1, size: 32MB, took 0.579 secs
m30007| Thu Jun 14 01:29:15 [FileAllocator] done allocating datafile /data/db/test7/foo.ns, size: 16MB, took 0.255 secs
m30007| Thu Jun 14 01:29:15 [FileAllocator] allocating new datafile /data/db/test7/foo.0, filling with zeroes...
m30007| Thu Jun 14 01:29:15 [FileAllocator] done allocating datafile /data/db/test7/foo.0, size: 16MB, took 0.248 secs
m30007| Thu Jun 14 01:29:15 [FileAllocator] allocating new datafile /data/db/test7/foo.1, filling with zeroes...
m30007| Thu Jun 14 01:29:15 [migrateThread] build index foo.bar { _id: 1 }
m30007| Thu Jun 14 01:29:15 [migrateThread] build index done. scanned 0 total records. 0 secs
m30007| Thu Jun 14 01:29:15 [migrateThread] info: creating collection foo.bar on add index
m30007| Thu Jun 14 01:29:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 7.0 } -> { _id: MaxKey }
m30006| Thu Jun 14 01:29:15 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30006", min: { _id: 7.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30006| Thu Jun 14 01:29:15 [conn5] moveChunk setting version to: 9|0||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:15 [initandlisten] connection accepted from 127.0.0.1:39166 #32 (31 connections now open)
m30007| Thu Jun 14 01:29:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 7.0 } -> { _id: MaxKey }
m30007| Thu Jun 14 01:29:15 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:15-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651755671), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: MaxKey }, step1 of 5: 719, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 294 } }
m30006| Thu Jun 14 01:29:15 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30006", min: { _id: 7.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30006| Thu Jun 14 01:29:15 [conn5] moveChunk updating self version to: 9|1||4fd976a08c7a5fd108c1eeb1 through { _id: 6.0 } -> { _id: 7.0 } for collection 'foo.bar'
m30006| Thu Jun 14 01:29:15 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:15-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57826", time: new Date(1339651755675), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0006", to: "shard0007" } }
m30006| Thu Jun 14 01:29:15 [conn5] doing delete inline
m30006| Thu Jun 14 01:29:15 [conn5] moveChunk deleted: 0
m30006| Thu Jun 14 01:29:15 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30006:1339651754:1225926596' unlocked.
m30006| Thu Jun 14 01:29:15 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:15-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57826", time: new Date(1339651755676), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 16, step6 of 6: 0 } }
m30006| Thu Jun 14 01:29:15 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30006", to: "localhost:30007", fromShard: "shard0006", toShard: "shard0007", min: { _id: 7.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_7.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:85 w:37 reslen:37 1021ms
m30999| Thu Jun 14 01:29:15 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 18 version: 9|1||4fd976a08c7a5fd108c1eeb1 based on: 8|3||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:29:15 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0007:localhost:30007 lastmod: 9|0||000000000000000000000000 min: { _id: 7.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:29:15 [initandlisten] connection accepted from 127.0.0.1:39167 #33 (32 connections now open)
m30007| Thu Jun 14 01:29:15 [conn4] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0007", splitKeys: [ { _id: 8.0 } ], shardId: "foo.bar-_id_7.0", configdb: "localhost:30000" }
m30007| Thu Jun 14 01:29:15 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30007| Thu Jun 14 01:29:15 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30007:1339651755:426846420' acquired, ts : 4fd976abb55095e29790364b
m30007| Thu Jun 14 01:29:15 [conn4] no current chunk manager found for this shard, will initialize
m30007| Thu Jun 14 01:29:15 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30007:1339651755:426846420 (sleeping for 30000ms)
m30007| Thu Jun 14 01:29:15 [conn4] splitChunk accepted at version 9|0||4fd976a08c7a5fd108c1eeb1
m30007| Thu Jun 14 01:29:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:15-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56513", time: new Date(1339651755683), what: "split", ns: "foo.bar", details: { before: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 9000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 9000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 9000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30007| Thu Jun 14 01:29:15 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30007:1339651755:426846420' unlocked.
{ "ok" : 1 }
m30000| Thu Jun 14 01:29:15 [initandlisten] connection accepted from 127.0.0.1:39168 #34 (33 connections now open)
m30999| Thu Jun 14 01:29:15 [conn] ChunkManager: time to load chunks for foo.bar: 3ms sequenceNumber: 19 version: 9|3||4fd976a08c7a5fd108c1eeb1 based on: 9|1||4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:15 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 8.0 }, to: "shard0008" }
m30999| Thu Jun 14 01:29:15 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0007:localhost:30007 lastmod: 9|3||000000000000000000000000 min: { _id: 8.0 } max: { _id: MaxKey }) shard0007:localhost:30007 -> shard0008:localhost:30008
m30007| Thu Jun 14 01:29:15 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30007", to: "localhost:30008", fromShard: "shard0007", toShard: "shard0008", min: { _id: 8.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_8.0", configdb: "localhost:30000" }
m30007| Thu Jun 14 01:29:15 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30007| Thu Jun 14 01:29:15 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30007:1339651755:426846420' acquired, ts : 4fd976abb55095e29790364c
m30007| Thu Jun 14 01:29:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:15-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56513", time: new Date(1339651755691), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0007", to: "shard0008" } }
m30007| Thu Jun 14 01:29:15 [conn4] moveChunk request accepted at version 9|3||4fd976a08c7a5fd108c1eeb1
m30007| Thu Jun 14 01:29:15 [conn4] moveChunk number of documents: 0
m30008| Thu Jun 14 01:29:15 [initandlisten] connection accepted from 127.0.0.1:51985 #7 (7 connections now open)
m30007| Thu Jun 14 01:29:15 [initandlisten] connection accepted from 127.0.0.1:56546 #8 (8 connections now open)
m30008| Thu Jun 14 01:29:15 [FileAllocator] allocating new datafile /data/db/test8/foo.ns, filling with zeroes...
m30008| Thu Jun 14 01:29:15 [FileAllocator] creating directory /data/db/test8/_tmp
m30007| Thu Jun 14 01:29:15 [FileAllocator] done allocating datafile /data/db/test7/foo.1, size: 32MB, took 0.548 secs
m30008| Thu Jun 14 01:29:16 [FileAllocator] done allocating datafile /data/db/test8/foo.ns, size: 16MB, took 0.28 secs
m30008| Thu Jun 14 01:29:16 [FileAllocator] allocating new datafile /data/db/test8/foo.0, filling with zeroes...
m30008| Thu Jun 14 01:29:16 [FileAllocator] done allocating datafile /data/db/test8/foo.0, size: 16MB, took 0.335 secs
m30008| Thu Jun 14 01:29:16 [FileAllocator] allocating new datafile /data/db/test8/foo.1, filling with zeroes...
m30008| Thu Jun 14 01:29:16 [migrateThread] build index foo.bar { _id: 1 }
m30008| Thu Jun 14 01:29:16 [migrateThread] build index done. scanned 0 total records. 0 secs
m30008| Thu Jun 14 01:29:16 [migrateThread] info: creating collection foo.bar on add index
m30008| Thu Jun 14 01:29:16 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 8.0 } -> { _id: MaxKey }
m30007| Thu Jun 14 01:29:16 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30007", min: { _id: 8.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30007| Thu Jun 14 01:29:16 [conn4] moveChunk setting version to: 10|0||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:16 [initandlisten] connection accepted from 127.0.0.1:39171 #35 (34 connections now open)
m30008| Thu Jun 14 01:29:16 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 8.0 } -> { _id: MaxKey }
m30008| Thu Jun 14 01:29:16 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:16-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651756699), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: MaxKey }, step1 of 5: 860, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 145 } }
m30007| Thu Jun 14 01:29:16 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30007", min: { _id: 8.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30007| Thu Jun 14 01:29:16 [conn4] moveChunk updating self version to: 10|1||4fd976a08c7a5fd108c1eeb1 through { _id: 7.0 } -> { _id: 8.0 } for collection 'foo.bar'
m30007| Thu Jun 14 01:29:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:16-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56513", time: new Date(1339651756703), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0007", to: "shard0008" } }
m30007| Thu Jun 14 01:29:16 [conn4] doing delete inline
m30007| Thu Jun 14 01:29:16 [conn4] moveChunk deleted: 0
m30007| Thu Jun 14 01:29:16 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30007:1339651755:426846420' unlocked.
m30007| Thu Jun 14 01:29:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:16-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56513", time: new Date(1339651756704), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1004, step5 of 6: 6, step6 of 6: 0 } }
m30007| Thu Jun 14 01:29:16 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30007", to: "localhost:30008", fromShard: "shard0007", toShard: "shard0008", min: { _id: 8.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_8.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:124 w:52 reslen:37 1014ms
{ "millis" : 1016, "ok" : 1 }
m30999| Thu Jun 14 01:29:16 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 20 version: 10|1||4fd976a08c7a5fd108c1eeb1 based on: 9|3||4fd976a08c7a5fd108c1eeb1
m30999| Thu Jun 14 01:29:16 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0008:localhost:30008 lastmod: 10|0||000000000000000000000000 min: { _id: 8.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:29:16 [initandlisten] connection accepted from 127.0.0.1:39172 #36 (35 connections now open)
m30008| Thu Jun 14 01:29:16 [conn4] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0008", splitKeys: [ { _id: 9.0 } ], shardId: "foo.bar-_id_8.0", configdb: "localhost:30000" }
m30008| Thu Jun 14 01:29:16 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30008| Thu Jun 14 01:29:16 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30008:1339651756:1936674587 (sleeping for 30000ms)
m30008| Thu Jun 14 01:29:16 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30008:1339651756:1936674587' acquired, ts : 4fd976ac931f8c327385b248
m30008| Thu Jun 14 01:29:16 [conn4] no current chunk manager found for this shard, will initialize
m30008| Thu Jun 14 01:29:16 [conn4] splitChunk accepted at version 10|0||4fd976a08c7a5fd108c1eeb1
m30008| Thu Jun 14 01:29:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:16-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51954", time: new Date(1339651756712), what: "split", ns: "foo.bar", details: { before: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 10000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 9.0 }, lastmod: Timestamp 10000|2, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }, right: { min: { _id: 9.0 }, max: { _id: MaxKey }, lastmod: Timestamp 10000|3, lastmodEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') } } }
m30008| Thu Jun 14 01:29:16 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30008:1339651756:1936674587' unlocked.
m30999| Thu Jun 14 01:29:16 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 21 version: 10|3||4fd976a08c7a5fd108c1eeb1 based on: 10|1||4fd976a08c7a5fd108c1eeb1
{ "ok" : 1 }
m30999| Thu Jun 14 01:29:16 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 9.0 }, to: "shard0009" }
m30999| Thu Jun 14 01:29:16 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0008:localhost:30008 lastmod: 10|3||000000000000000000000000 min: { _id: 9.0 } max: { _id: MaxKey }) shard0008:localhost:30008 -> shard0009:localhost:30009
m30008| Thu Jun 14 01:29:16 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30008", to: "localhost:30009", fromShard: "shard0008", toShard: "shard0009", min: { _id: 9.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_9.0", configdb: "localhost:30000" }
m30008| Thu Jun 14 01:29:16 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30008| Thu Jun 14 01:29:16 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30008:1339651756:1936674587' acquired, ts : 4fd976ac931f8c327385b249
m30008| Thu Jun 14 01:29:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:16-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51954", time: new Date(1339651756719), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0008", to: "shard0009" } }
m30008| Thu Jun 14 01:29:16 [conn4] moveChunk request accepted at version 10|3||4fd976a08c7a5fd108c1eeb1
m30008| Thu Jun 14 01:29:16 [conn4] moveChunk number of documents: 0
m30009| Thu Jun 14 01:29:16 [initandlisten] connection accepted from 127.0.0.1:46371 #7 (7 connections now open)
m30008| Thu Jun 14 01:29:16 [initandlisten] connection accepted from 127.0.0.1:51990 #8 (8 connections now open)
m30009| Thu Jun 14 01:29:16 [FileAllocator] allocating new datafile /data/db/test9/foo.ns, filling with zeroes...
m30009| Thu Jun 14 01:29:16 [FileAllocator] creating directory /data/db/test9/_tmp
m30008| Thu Jun 14 01:29:17 [FileAllocator] done allocating datafile /data/db/test8/foo.1, size: 32MB, took 0.576 secs
m30009| Thu Jun 14 01:29:17 [FileAllocator] done allocating datafile /data/db/test9/foo.ns, size: 16MB, took 0.288 secs
m30009| Thu Jun 14 01:29:17 [FileAllocator] allocating new datafile /data/db/test9/foo.0, filling with zeroes...
m30009| Thu Jun 14 01:29:17 [FileAllocator] done allocating datafile /data/db/test9/foo.0, size: 16MB, took 0.275 secs
m30009| Thu Jun 14 01:29:17 [FileAllocator] allocating new datafile /data/db/test9/foo.1, filling with zeroes...
m30009| Thu Jun 14 01:29:17 [migrateThread] build index foo.bar { _id: 1 }
m30009| Thu Jun 14 01:29:17 [migrateThread] build index done. scanned 0 total records. 0 secs
m30009| Thu Jun 14 01:29:17 [migrateThread] info: creating collection foo.bar on add index
m30009| Thu Jun 14 01:29:17 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 9.0 } -> { _id: MaxKey }
m30008| Thu Jun 14 01:29:17 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30008", min: { _id: 9.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30008| Thu Jun 14 01:29:17 [conn4] moveChunk setting version to: 11|0||4fd976a08c7a5fd108c1eeb1
m30009| Thu Jun 14 01:29:17 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 9.0 } -> { _id: MaxKey }
m30009| Thu Jun 14 01:29:17 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:17-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651757727), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, step1 of 5: 982, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 23 } }
m30008| Thu Jun 14 01:29:17 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30008", min: { _id: 9.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30008| Thu Jun 14 01:29:17 [conn4] moveChunk updating self version to: 11|1||4fd976a08c7a5fd108c1eeb1 through { _id: 8.0 } -> { _id: 9.0 } for collection 'foo.bar'
m30008| Thu Jun 14 01:29:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:17-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51954", time: new Date(1339651757731), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0008", to: "shard0009" } }
m30008| Thu Jun 14 01:29:17 [conn4] doing delete inline
m30008| Thu Jun 14 01:29:17 [conn4] moveChunk deleted: 0
m30008| Thu Jun 14 01:29:17 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30008:1339651756:1936674587' unlocked.
m30008| Thu Jun 14 01:29:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:17-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51954", time: new Date(1339651757732), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1005, step5 of 6: 6, step6 of 6: 0 } }
m30008| Thu Jun 14 01:29:17 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30008", to: "localhost:30009", fromShard: "shard0008", toShard: "shard0009", min: { _id: 9.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_9.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:100 w:49 reslen:37 1014ms
m30999| Thu Jun 14 01:29:17 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 22 version: 11|1||4fd976a08c7a5fd108c1eeb1 based on: 10|3||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1016, "ok" : 1 }
m30000| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:39175 #37 (36 connections now open)
m30000| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:39176 #38 (37 connections now open)
m30004| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:52377 #9 (9 connections now open)
m30005| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:36168 #10 (10 connections now open)
m30007| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:56559 #9 (9 connections now open)
m30008| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:52000 #9 (9 connections now open)
m30009| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:46383 #8 (8 connections now open)
m30009| Thu Jun 14 01:29:17 [conn8] no current chunk manager found for this shard, will initialize
0
0
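The two bare 0 lines are the test's own output, presumably empty document counts for foo.bar routed through the mongos processes (the preceding burst of connections to every shard is consistent with a count being scattered to all chunk owners, and no documents were ever inserted). At this point the trailing { _id: 9 } -> MaxKey chunk has reached shard0009, and each of shards 0004-0008 has been left with one single-key chunk from the earlier hops. A hedged way to confirm that layout from the config metadata:

// count foo.bar chunks per shard (shard ids come from config.shards)
var config = db.getSiblingDB("config");
config.shards.find().forEach(function (s) {
    print(s._id + ": " + config.chunks.count({ ns: "foo.bar", shard: s._id }));
});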
m30999| Thu Jun 14 01:29:17 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:29:17 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 3|1||000000000000000000000000 min: { _id: 0.0 } max: { _id: 1.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:29:17 [conn25] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:29:17 [conn25] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:29:17 [conn25] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651747:872239924' acquired, ts : 4fd976aded7d681b17aa21b0
m30000| Thu Jun 14 01:29:17 [conn25] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:17-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:39134", time: new Date(1339651757753), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:29:17 [conn25] moveChunk request accepted at version 3|1||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:17 [conn25] moveChunk number of documents: 0
m30006| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:57872 #9 (9 connections now open)
m30003| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:57744 #9 (9 connections now open)
m30002| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:46697 #9 (9 connections now open)
m30001| Thu Jun 14 01:29:17 [initandlisten] connection accepted from 127.0.0.1:59067 #9 (9 connections now open)
m30001| Thu Jun 14 01:29:17 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 0.0 } -> { _id: 1.0 }
m30998| Thu Jun 14 01:29:17 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 11|1||4fd976a08c7a5fd108c1eeb1 based on: (empty)
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30003 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30004 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30005 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30006 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30007 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30008 serverID: 4fd976a0f3078a2c877d8226
m30998| Thu Jun 14 01:29:17 [conn] creating WriteBackListener for: localhost:30009 serverID: 4fd976a0f3078a2c877d8226
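m30998 is the second mongos in the test. Its "based on: (empty)" ChunkManager line shows it loading the foo.bar routing table for the first time, after which it lazily creates one WriteBackListener per shard host; both happen on first use, presumably when the test routes an operation through it. A minimal sketch of driving that second router directly, assuming it listens on port 30998 as its mXXXXX prefix suggests:

// connect to the second mongos and run the same (empty) query through it
var m2 = new Mongo("localhost:30998");
print(m2.getDB("foo").bar.find().itcount());   // 0, matching the counts printed by the test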
m30009| Thu Jun 14 01:29:18 [FileAllocator] done allocating datafile /data/db/test9/foo.1, size: 32MB, took 0.632 secs
m30000| Thu Jun 14 01:29:18 [conn25] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:29:18 [conn25] moveChunk setting version to: 12|0||4fd976a08c7a5fd108c1eeb1
m30001| Thu Jun 14 01:29:18 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 0.0 } -> { _id: 1.0 }
m30001| Thu Jun 14 01:29:18 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:18-10", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651758763), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30000| Thu Jun 14 01:29:18 [conn25] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:29:18 [conn25] moveChunk moved last chunk out for collection 'foo.bar'
m30000| Thu Jun 14 01:29:18 [conn25] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:18-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:39134", time: new Date(1339651758767), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:29:18 [conn25] doing delete inline
m30000| Thu Jun 14 01:29:18 [conn25] moveChunk deleted: 0
m30000| Thu Jun 14 01:29:18 [conn25] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339651747:872239924' unlocked.
m30000| Thu Jun 14 01:29:18 [conn25] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:18-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:39134", time: new Date(1339651758768), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30000| Thu Jun 14 01:29:18 [conn25] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:10813 w:507 reslen:37 1016ms
m30999| Thu Jun 14 01:29:18 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 23 version: 12|0||4fd976a08c7a5fd108c1eeb1 based on: 11|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1017, "ok" : 1 }
m30999| Thu Jun 14 01:29:18 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 1.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:29:18 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 3|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: 2.0 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:29:18 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:18 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:29:18 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' acquired, ts : 4fd976ae5b3204a1e397720d
m30001| Thu Jun 14 01:29:18 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:18-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651758772), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:29:18 [conn4] moveChunk request accepted at version 12|0||4fd976a08c7a5fd108c1eeb1
m30001| Thu Jun 14 01:29:18 [conn4] moveChunk number of documents: 0
m30002| Thu Jun 14 01:29:18 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
m30001| Thu Jun 14 01:29:19 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:29:19 [conn4] moveChunk setting version to: 13|0||4fd976a08c7a5fd108c1eeb1
m30002| Thu Jun 14 01:29:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
m30002| Thu Jun 14 01:29:19 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:19-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651759783), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30001| Thu Jun 14 01:29:19 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:29:19 [conn4] moveChunk updating self version to: 13|1||4fd976a08c7a5fd108c1eeb1 through { _id: MinKey } -> { _id: 0.0 } for collection 'foo.bar'
m30001| Thu Jun 14 01:29:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:19-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651759787), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:29:19 [conn4] doing delete inline
m30001| Thu Jun 14 01:29:19 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:29:19 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651745:1408930224' unlocked.
m30001| Thu Jun 14 01:29:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:19-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:58997", time: new Date(1339651759788), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:29:19 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:256 w:152 reslen:37 1017ms
m30999| Thu Jun 14 01:29:19 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 24 version: 13|1||4fd976a08c7a5fd108c1eeb1 based on: 12|0||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1019, "ok" : 1 }
m30999| Thu Jun 14 01:29:19 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 2.0 }, to: "shard0003" }
m30999| Thu Jun 14 01:29:19 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0002:localhost:30002 lastmod: 5|1||000000000000000000000000 min: { _id: 2.0 } max: { _id: 3.0 }) shard0002:localhost:30002 -> shard0003:localhost:30003
m30002| Thu Jun 14 01:29:19 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30002", to: "localhost:30003", fromShard: "shard0002", toShard: "shard0003", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_2.0", configdb: "localhost:30000" }
m30002| Thu Jun 14 01:29:19 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:29:19 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30002:1339651749:891230109' acquired, ts : 4fd976af074758c3ce75c4dd
m30002| Thu Jun 14 01:29:19 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:19-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46634", time: new Date(1339651759793), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0002", to: "shard0003" } }
m30002| Thu Jun 14 01:29:19 [conn5] moveChunk request accepted at version 13|0||4fd976a08c7a5fd108c1eeb1
m30002| Thu Jun 14 01:29:19 [conn5] moveChunk number of documents: 0
m30003| Thu Jun 14 01:29:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 2.0 } -> { _id: 3.0 }
m30002| Thu Jun 14 01:29:20 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30002", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Thu Jun 14 01:29:20 [conn5] moveChunk setting version to: 14|0||4fd976a08c7a5fd108c1eeb1
m30003| Thu Jun 14 01:29:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 2.0 } -> { _id: 3.0 }
m30003| Thu Jun 14 01:29:20 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:20-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651760803), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30002| Thu Jun 14 01:29:20 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30002", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30002| Thu Jun 14 01:29:20 [conn5] moveChunk updating self version to: 14|1||4fd976a08c7a5fd108c1eeb1 through { _id: 1.0 } -> { _id: 2.0 } for collection 'foo.bar'
m30002| Thu Jun 14 01:29:20 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:20-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46634", time: new Date(1339651760807), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0002", to: "shard0003" } }
m30002| Thu Jun 14 01:29:20 [conn5] doing delete inline
m30002| Thu Jun 14 01:29:20 [conn5] moveChunk deleted: 0
m30002| Thu Jun 14 01:29:20 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30002:1339651749:891230109' unlocked.
m30002| Thu Jun 14 01:29:20 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:20-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46634", time: new Date(1339651760808), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 0 } }
m30002| Thu Jun 14 01:29:20 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30002", to: "localhost:30003", fromShard: "shard0002", toShard: "shard0003", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_2.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:141 w:66 reslen:37 1016ms
m30999| Thu Jun 14 01:29:20 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 25 version: 14|1||4fd976a08c7a5fd108c1eeb1 based on: 13|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1018, "ok" : 1 }
m30999| Thu Jun 14 01:29:20 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 3.0 }, to: "shard0004" }
m30999| Thu Jun 14 01:29:20 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0003:localhost:30003 lastmod: 6|1||000000000000000000000000 min: { _id: 3.0 } max: { _id: 4.0 }) shard0003:localhost:30003 -> shard0004:localhost:30004
m30003| Thu Jun 14 01:29:20 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30003", to: "localhost:30004", fromShard: "shard0003", toShard: "shard0004", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_3.0", configdb: "localhost:30000" }
m30003| Thu Jun 14 01:29:20 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30003| Thu Jun 14 01:29:20 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30003:1339651750:143292051' acquired, ts : 4fd976b0ac11d87ec8873d5d
m30003| Thu Jun 14 01:29:20 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:20-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57685", time: new Date(1339651760813), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0003", to: "shard0004" } }
m30003| Thu Jun 14 01:29:20 [conn5] moveChunk request accepted at version 14|0||4fd976a08c7a5fd108c1eeb1
m30003| Thu Jun 14 01:29:20 [conn5] moveChunk number of documents: 0
m30004| Thu Jun 14 01:29:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 3.0 } -> { _id: 4.0 }
m30003| Thu Jun 14 01:29:21 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30003", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Thu Jun 14 01:29:21 [conn5] moveChunk setting version to: 15|0||4fd976a08c7a5fd108c1eeb1
m30004| Thu Jun 14 01:29:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 3.0 } -> { _id: 4.0 }
m30004| Thu Jun 14 01:29:21 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:21-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651761827), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1012 } }
m30003| Thu Jun 14 01:29:21 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30003", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30003| Thu Jun 14 01:29:21 [conn5] moveChunk updating self version to: 15|1||4fd976a08c7a5fd108c1eeb1 through { _id: 2.0 } -> { _id: 3.0 } for collection 'foo.bar'
m30003| Thu Jun 14 01:29:21 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:21-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57685", time: new Date(1339651761831), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0003", to: "shard0004" } }
m30003| Thu Jun 14 01:29:21 [conn5] doing delete inline
m30003| Thu Jun 14 01:29:21 [conn5] moveChunk deleted: 0
m30003| Thu Jun 14 01:29:21 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30003:1339651750:143292051' unlocked.
m30003| Thu Jun 14 01:29:21 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:21-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57685", time: new Date(1339651761832), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 16, step6 of 6: 0 } }
m30003| Thu Jun 14 01:29:21 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30003", to: "localhost:30004", fromShard: "shard0003", toShard: "shard0004", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_3.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:177 w:75 reslen:37 1020ms
m30999| Thu Jun 14 01:29:21 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 26 version: 15|1||4fd976a08c7a5fd108c1eeb1 based on: 14|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1022, "ok" : 1 }
m30999| Thu Jun 14 01:29:21 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 4.0 }, to: "shard0005" }
m30999| Thu Jun 14 01:29:21 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0004:localhost:30004 lastmod: 7|1||000000000000000000000000 min: { _id: 4.0 } max: { _id: 5.0 }) shard0004:localhost:30004 -> shard0005:localhost:30005
m30004| Thu Jun 14 01:29:21 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30004", to: "localhost:30005", fromShard: "shard0004", toShard: "shard0005", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_4.0", configdb: "localhost:30000" }
m30004| Thu Jun 14 01:29:21 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30004| Thu Jun 14 01:29:21 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30004:1339651751:1901482191' acquired, ts : 4fd976b15ad69a0083a1699d
m30004| Thu Jun 14 01:29:21 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:21-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52322", time: new Date(1339651761837), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0004", to: "shard0005" } }
m30004| Thu Jun 14 01:29:21 [conn5] moveChunk request accepted at version 15|0||4fd976a08c7a5fd108c1eeb1
m30004| Thu Jun 14 01:29:21 [conn5] moveChunk number of documents: 0
m30005| Thu Jun 14 01:29:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 4.0 } -> { _id: 5.0 }
m30004| Thu Jun 14 01:29:22 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30004", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30004| Thu Jun 14 01:29:22 [conn5] moveChunk setting version to: 16|0||4fd976a08c7a5fd108c1eeb1
m30005| Thu Jun 14 01:29:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 4.0 } -> { _id: 5.0 }
m30005| Thu Jun 14 01:29:22 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:22-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651762851), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1011 } }
m30004| Thu Jun 14 01:29:22 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30004", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30004| Thu Jun 14 01:29:22 [conn5] moveChunk updating self version to: 16|1||4fd976a08c7a5fd108c1eeb1 through { _id: 3.0 } -> { _id: 4.0 } for collection 'foo.bar'
m30004| Thu Jun 14 01:29:22 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:22-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52322", time: new Date(1339651762856), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0004", to: "shard0005" } }
m30004| Thu Jun 14 01:29:22 [conn5] doing delete inline
m30004| Thu Jun 14 01:29:22 [conn5] moveChunk deleted: 0
m30004| Thu Jun 14 01:29:22 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30004:1339651751:1901482191' unlocked.
m30004| Thu Jun 14 01:29:22 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:22-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:52322", time: new Date(1339651762856), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 16, step6 of 6: 0 } }
m30004| Thu Jun 14 01:29:22 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30004", to: "localhost:30005", fromShard: "shard0004", toShard: "shard0005", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_4.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:187 w:88 reslen:37 1020ms
m30999| Thu Jun 14 01:29:22 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 27 version: 16|1||4fd976a08c7a5fd108c1eeb1 based on: 15|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1022, "ok" : 1 }
m30999| Thu Jun 14 01:29:22 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 5.0 }, to: "shard0006" }
m30999| Thu Jun 14 01:29:22 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0005:localhost:30005 lastmod: 8|1||000000000000000000000000 min: { _id: 5.0 } max: { _id: 6.0 }) shard0005:localhost:30005 -> shard0006:localhost:30006
m30005| Thu Jun 14 01:29:22 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30005", to: "localhost:30006", fromShard: "shard0005", toShard: "shard0006", min: { _id: 5.0 }, max: { _id: 6.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_5.0", configdb: "localhost:30000" }
m30005| Thu Jun 14 01:29:22 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30005| Thu Jun 14 01:29:22 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30005:1339651753:29859155' acquired, ts : 4fd976b2e8f51e72833aae1e
m30005| Thu Jun 14 01:29:22 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:22-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:36117", time: new Date(1339651762861), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0005", to: "shard0006" } }
m30005| Thu Jun 14 01:29:22 [conn5] moveChunk request accepted at version 16|0||4fd976a08c7a5fd108c1eeb1
m30005| Thu Jun 14 01:29:22 [conn5] moveChunk number of documents: 0
m30006| Thu Jun 14 01:29:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 5.0 } -> { _id: 6.0 }
m30005| Thu Jun 14 01:29:23 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30005", min: { _id: 5.0 }, max: { _id: 6.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30005| Thu Jun 14 01:29:23 [conn5] moveChunk setting version to: 17|0||4fd976a08c7a5fd108c1eeb1
m30006| Thu Jun 14 01:29:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 5.0 } -> { _id: 6.0 }
m30006| Thu Jun 14 01:29:23 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:23-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651763875), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1012 } }
m30005| Thu Jun 14 01:29:23 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30005", min: { _id: 5.0 }, max: { _id: 6.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30005| Thu Jun 14 01:29:23 [conn5] moveChunk updating self version to: 17|1||4fd976a08c7a5fd108c1eeb1 through { _id: 4.0 } -> { _id: 5.0 } for collection 'foo.bar'
m30005| Thu Jun 14 01:29:23 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:23-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:36117", time: new Date(1339651763880), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0005", to: "shard0006" } }
m30005| Thu Jun 14 01:29:23 [conn5] doing delete inline
m30005| Thu Jun 14 01:29:23 [conn5] moveChunk deleted: 0
m30005| Thu Jun 14 01:29:23 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30005:1339651753:29859155' unlocked.
m30005| Thu Jun 14 01:29:23 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:23-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:36117", time: new Date(1339651763880), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 16, step6 of 6: 0 } }
m30005| Thu Jun 14 01:29:23 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30005", to: "localhost:30006", fromShard: "shard0005", toShard: "shard0006", min: { _id: 5.0 }, max: { _id: 6.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_5.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:158 w:79 reslen:37 1020ms
m30999| Thu Jun 14 01:29:23 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 28 version: 17|1||4fd976a08c7a5fd108c1eeb1 based on: 16|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1022, "ok" : 1 }
m30999| Thu Jun 14 01:29:23 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 6.0 }, to: "shard0007" }
m30999| Thu Jun 14 01:29:23 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0006:localhost:30006 lastmod: 9|1||000000000000000000000000 min: { _id: 6.0 } max: { _id: 7.0 }) shard0006:localhost:30006 -> shard0007:localhost:30007
m30006| Thu Jun 14 01:29:23 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30006", to: "localhost:30007", fromShard: "shard0006", toShard: "shard0007", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_6.0", configdb: "localhost:30000" }
m30006| Thu Jun 14 01:29:23 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30006| Thu Jun 14 01:29:23 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30006:1339651754:1225926596' acquired, ts : 4fd976b3464a4077e5304bb8
m30006| Thu Jun 14 01:29:23 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:23-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57826", time: new Date(1339651763885), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0006", to: "shard0007" } }
m30006| Thu Jun 14 01:29:23 [conn5] moveChunk request accepted at version 17|0||4fd976a08c7a5fd108c1eeb1
m30006| Thu Jun 14 01:29:23 [conn5] moveChunk number of documents: 0
m30007| Thu Jun 14 01:29:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 6.0 } -> { _id: 7.0 }
m30006| Thu Jun 14 01:29:24 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30006", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30006| Thu Jun 14 01:29:24 [conn5] moveChunk setting version to: 18|0||4fd976a08c7a5fd108c1eeb1
m30007| Thu Jun 14 01:29:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 6.0 } -> { _id: 7.0 }
m30007| Thu Jun 14 01:29:24 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:24-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651764895), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30006| Thu Jun 14 01:29:24 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30006", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30006| Thu Jun 14 01:29:24 [conn5] moveChunk updating self version to: 18|1||4fd976a08c7a5fd108c1eeb1 through { _id: 5.0 } -> { _id: 6.0 } for collection 'foo.bar'
m30006| Thu Jun 14 01:29:24 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:24-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57826", time: new Date(1339651764896), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0006", to: "shard0007" } }
m30006| Thu Jun 14 01:29:24 [conn5] doing delete inline
m30006| Thu Jun 14 01:29:24 [conn5] moveChunk deleted: 0
m30006| Thu Jun 14 01:29:24 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30006:1339651754:1225926596' unlocked.
m30006| Thu Jun 14 01:29:24 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:24-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:57826", time: new Date(1339651764896), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 8, step6 of 6: 0 } }
m30006| Thu Jun 14 01:29:24 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30006", to: "localhost:30007", fromShard: "shard0006", toShard: "shard0007", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_6.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:159 w:66 reslen:37 1012ms
m30999| Thu Jun 14 01:29:24 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 29 version: 18|1||4fd976a08c7a5fd108c1eeb1 based on: 17|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1014, "ok" : 1 }
m30999| Thu Jun 14 01:29:24 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 7.0 }, to: "shard0008" }
m30999| Thu Jun 14 01:29:24 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0007:localhost:30007 lastmod: 10|1||000000000000000000000000 min: { _id: 7.0 } max: { _id: 8.0 }) shard0007:localhost:30007 -> shard0008:localhost:30008
m30007| Thu Jun 14 01:29:24 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30007", to: "localhost:30008", fromShard: "shard0007", toShard: "shard0008", min: { _id: 7.0 }, max: { _id: 8.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_7.0", configdb: "localhost:30000" }
m30007| Thu Jun 14 01:29:24 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30007| Thu Jun 14 01:29:24 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30007:1339651755:426846420' acquired, ts : 4fd976b4b55095e29790364d
m30007| Thu Jun 14 01:29:24 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:24-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56513", time: new Date(1339651764901), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0007", to: "shard0008" } }
m30007| Thu Jun 14 01:29:24 [conn4] moveChunk request accepted at version 18|0||4fd976a08c7a5fd108c1eeb1
m30007| Thu Jun 14 01:29:24 [conn4] moveChunk number of documents: 0
m30008| Thu Jun 14 01:29:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 7.0 } -> { _id: 8.0 }
m30007| Thu Jun 14 01:29:25 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30007", min: { _id: 7.0 }, max: { _id: 8.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30007| Thu Jun 14 01:29:25 [conn4] moveChunk setting version to: 19|0||4fd976a08c7a5fd108c1eeb1
m30008| Thu Jun 14 01:29:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 7.0 } -> { _id: 8.0 }
m30008| Thu Jun 14 01:29:25 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:25-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651765911), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30007| Thu Jun 14 01:29:25 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30007", min: { _id: 7.0 }, max: { _id: 8.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30007| Thu Jun 14 01:29:25 [conn4] moveChunk updating self version to: 19|1||4fd976a08c7a5fd108c1eeb1 through { _id: 6.0 } -> { _id: 7.0 } for collection 'foo.bar'
m30007| Thu Jun 14 01:29:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:25-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56513", time: new Date(1339651765912), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0007", to: "shard0008" } }
m30007| Thu Jun 14 01:29:25 [conn4] doing delete inline
m30007| Thu Jun 14 01:29:25 [conn4] moveChunk deleted: 0
m30007| Thu Jun 14 01:29:25 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30007:1339651755:426846420' unlocked.
m30007| Thu Jun 14 01:29:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:25-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56513", time: new Date(1339651765913), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 9, step6 of 6: 0 } }
m30007| Thu Jun 14 01:29:25 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30007", to: "localhost:30008", fromShard: "shard0007", toShard: "shard0008", min: { _id: 7.0 }, max: { _id: 8.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_7.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:191 w:77 reslen:37 1012ms
m30999| Thu Jun 14 01:29:25 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 30 version: 19|1||4fd976a08c7a5fd108c1eeb1 based on: 18|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1014, "ok" : 1 }
m30999| Thu Jun 14 01:29:25 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 8.0 }, to: "shard0009" }
m30999| Thu Jun 14 01:29:25 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0008:localhost:30008 lastmod: 11|1||000000000000000000000000 min: { _id: 8.0 } max: { _id: 9.0 }) shard0008:localhost:30008 -> shard0009:localhost:30009
m30008| Thu Jun 14 01:29:25 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30008", to: "localhost:30009", fromShard: "shard0008", toShard: "shard0009", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_8.0", configdb: "localhost:30000" }
m30008| Thu Jun 14 01:29:25 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30008| Thu Jun 14 01:29:25 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30008:1339651756:1936674587' acquired, ts : 4fd976b5931f8c327385b24a
m30008| Thu Jun 14 01:29:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:25-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51954", time: new Date(1339651765918), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0008", to: "shard0009" } }
m30008| Thu Jun 14 01:29:25 [conn4] moveChunk request accepted at version 19|0||4fd976a08c7a5fd108c1eeb1
m30008| Thu Jun 14 01:29:25 [conn4] moveChunk number of documents: 0
m30009| Thu Jun 14 01:29:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 8.0 } -> { _id: 9.0 }
m30008| Thu Jun 14 01:29:26 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30008", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30008| Thu Jun 14 01:29:26 [conn4] moveChunk setting version to: 20|0||4fd976a08c7a5fd108c1eeb1
m30009| Thu Jun 14 01:29:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 8.0 } -> { _id: 9.0 }
m30009| Thu Jun 14 01:29:26 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:26-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651766927), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1007 } }
m30008| Thu Jun 14 01:29:26 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30008", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30008| Thu Jun 14 01:29:26 [conn4] moveChunk updating self version to: 20|1||4fd976a08c7a5fd108c1eeb1 through { _id: 7.0 } -> { _id: 8.0 } for collection 'foo.bar'
m30008| Thu Jun 14 01:29:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:26-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51954", time: new Date(1339651766928), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0008", to: "shard0009" } }
m30008| Thu Jun 14 01:29:26 [conn4] doing delete inline
m30008| Thu Jun 14 01:29:26 [conn4] moveChunk deleted: 0
m30008| Thu Jun 14 01:29:26 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30008:1339651756:1936674587' unlocked.
m30008| Thu Jun 14 01:29:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:26-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51954", time: new Date(1339651766929), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 5, step6 of 6: 0 } }
m30008| Thu Jun 14 01:29:26 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30008", to: "localhost:30009", fromShard: "shard0008", toShard: "shard0009", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_8.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:172 w:83 reslen:37 1012ms
m30999| Thu Jun 14 01:29:26 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 31 version: 20|1||4fd976a08c7a5fd108c1eeb1 based on: 19|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1014, "ok" : 1 }
m30999| Thu Jun 14 01:29:26 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 9.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:29:26 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0009:localhost:30009 lastmod: 11|0||000000000000000000000000 min: { _id: 9.0 } max: { _id: MaxKey }) shard0009:localhost:30009 -> shard0000:localhost:30000
m30000| Thu Jun 14 01:29:26 [initandlisten] connection accepted from 127.0.0.1:39186 #39 (38 connections now open)
m30009| Thu Jun 14 01:29:26 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30009", to: "localhost:30000", fromShard: "shard0009", toShard: "shard0000", min: { _id: 9.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_9.0", configdb: "localhost:30000" }
m30009| Thu Jun 14 01:29:26 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30009| Thu Jun 14 01:29:26 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30009:1339651766:669448339 (sleeping for 30000ms)
m30009| Thu Jun 14 01:29:26 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30009:1339651766:669448339' acquired, ts : 4fd976b6a672415c9c2680aa
m30009| Thu Jun 14 01:29:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:26-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46337", time: new Date(1339651766936), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0009", to: "shard0000" } }
m30009| Thu Jun 14 01:29:26 [conn4] moveChunk request accepted at version 20|0||4fd976a08c7a5fd108c1eeb1
m30009| Thu Jun 14 01:29:26 [conn4] moveChunk number of documents: 0
m30009| Thu Jun 14 01:29:26 [initandlisten] connection accepted from 127.0.0.1:46385 #9 (9 connections now open)
m30000| Thu Jun 14 01:29:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 9.0 } -> { _id: MaxKey }
m30009| Thu Jun 14 01:29:27 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30009", min: { _id: 9.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30009| Thu Jun 14 01:29:27 [conn4] moveChunk setting version to: 21|0||4fd976a08c7a5fd108c1eeb1
m30000| Thu Jun 14 01:29:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 9.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:29:27 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:27-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651767947), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30009| Thu Jun 14 01:29:27 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30009", min: { _id: 9.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30009| Thu Jun 14 01:29:27 [conn4] moveChunk updating self version to: 21|1||4fd976a08c7a5fd108c1eeb1 through { _id: 8.0 } -> { _id: 9.0 } for collection 'foo.bar'
m30009| Thu Jun 14 01:29:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:27-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46337", time: new Date(1339651767952), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0009", to: "shard0000" } }
m30009| Thu Jun 14 01:29:27 [conn4] doing delete inline
m30009| Thu Jun 14 01:29:27 [conn4] moveChunk deleted: 0
m30009| Thu Jun 14 01:29:27 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30009:1339651766:669448339' unlocked.
m30009| Thu Jun 14 01:29:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:27-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:46337", time: new Date(1339651767952), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 9.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30009| Thu Jun 14 01:29:27 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30009", to: "localhost:30000", fromShard: "shard0009", toShard: "shard0000", min: { _id: 9.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_9.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:93 w:39 reslen:37 1019ms
m30999| Thu Jun 14 01:29:27 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 32 version: 21|1||4fd976a08c7a5fd108c1eeb1 based on: 20|1||4fd976a08c7a5fd108c1eeb1
{ "millis" : 1022, "ok" : 1 }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
      { "_id" : "shard0003", "host" : "localhost:30003" }
      { "_id" : "shard0004", "host" : "localhost:30004" }
      { "_id" : "shard0005", "host" : "localhost:30005" }
      { "_id" : "shard0006", "host" : "localhost:30006" }
      { "_id" : "shard0007", "host" : "localhost:30007" }
      { "_id" : "shard0008", "host" : "localhost:30008" }
      { "_id" : "shard0009", "host" : "localhost:30009" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "foo", "partitioned" : true, "primary" : "shard0001" }
          foo.bar chunks:
              shard0001  2
              shard0002  1
              shard0003  1
              shard0004  1
              shard0005  1
              shard0006  1
              shard0007  1
              shard0008  1
              shard0009  1
              shard0000  1
          { "_id" : { $minKey : 1 } } -->> { "_id" : 0 } on : shard0001 Timestamp(13000, 1)
          { "_id" : 0 } -->> { "_id" : 1 } on : shard0001 Timestamp(12000, 0)
          { "_id" : 1 } -->> { "_id" : 2 } on : shard0002 Timestamp(14000, 1)
          { "_id" : 2 } -->> { "_id" : 3 } on : shard0003 Timestamp(15000, 1)
          { "_id" : 3 } -->> { "_id" : 4 } on : shard0004 Timestamp(16000, 1)
          { "_id" : 4 } -->> { "_id" : 5 } on : shard0005 Timestamp(17000, 1)
          { "_id" : 5 } -->> { "_id" : 6 } on : shard0006 Timestamp(18000, 1)
          { "_id" : 6 } -->> { "_id" : 7 } on : shard0007 Timestamp(19000, 1)
          { "_id" : 7 } -->> { "_id" : 8 } on : shard0008 Timestamp(20000, 1)
          { "_id" : 8 } -->> { "_id" : 9 } on : shard0009 Timestamp(21000, 1)
          { "_id" : 9 } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(21000, 0)
----
Running count!
----
m30000| Thu Jun 14 01:29:28 [conn38] assertion 13388 [foo.bar] shard version not ok in Client::Context: this shard no longer contains chunks for foo.bar, the collection may have been dropped ( ns : foo.bar, received : 3|1||4fd976a08c7a5fd108c1eeb1, wanted : 0|0||000000000000000000000000, send ) ( ns : foo.bar, received : 3|1||4fd976a08c7a5fd108c1eeb1, wanted : 0|0||000000000000000000000000, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30000| Thu Jun 14 01:29:28 [conn38] ntoskip:0 ntoreturn:1
m30000| Thu Jun 14 01:29:28 [conn38] { $err: "[foo.bar] shard version not ok in Client::Context: this shard no longer contains chunks for foo.bar, the collection may have been dropped ( ns : foo.b...", code: 13388, ns: "foo.bar", vReceived: Timestamp 3000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 0|0, vWantedEpoch: ObjectId('000000000000000000000000') }
m30002| Thu Jun 14 01:29:28 [conn9] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 14 does not match received 5 ( ns : foo.bar, received : 5|1||4fd976a08c7a5fd108c1eeb1, wanted : 14|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 5|1||4fd976a08c7a5fd108c1eeb1, wanted : 14|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30002| Thu Jun 14 01:29:28 [conn9] ntoskip:0 ntoreturn:1
m30002| Thu Jun 14 01:29:28 [conn9] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 14 does not match received 5 ( ns : foo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 5000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 14000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30001| Thu Jun 14 01:29:28 [conn9] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 13 does not match received 4 ( ns : foo.bar, received : 4|1||4fd976a08c7a5fd108c1eeb1, wanted : 13|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 4|1||4fd976a08c7a5fd108c1eeb1, wanted : 13|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30001| Thu Jun 14 01:29:28 [conn9] ntoskip:0 ntoreturn:1
m30001| Thu Jun 14 01:29:28 [conn9] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 13 does not match received 4 ( ns : foo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 4000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 13000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30003| Thu Jun 14 01:29:28 [conn9] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 15 does not match received 6 ( ns : foo.bar, received : 6|1||4fd976a08c7a5fd108c1eeb1, wanted : 15|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 6|1||4fd976a08c7a5fd108c1eeb1, wanted : 15|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30003| Thu Jun 14 01:29:28 [conn9] ntoskip:0 ntoreturn:1
m30003| Thu Jun 14 01:29:28 [conn9] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 15 does not match received 6 ( ns : foo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 6000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 15000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30004| Thu Jun 14 01:29:28 [conn9] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 16 does not match received 7 ( ns : foo.bar, received : 7|1||4fd976a08c7a5fd108c1eeb1, wanted : 16|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 7|1||4fd976a08c7a5fd108c1eeb1, wanted : 16|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30004| Thu Jun 14 01:29:28 [conn9] ntoskip:0 ntoreturn:1
m30004| Thu Jun 14 01:29:28 [conn9] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 16 does not match received 7 ( ns : foo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 7000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 16000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30005| Thu Jun 14 01:29:28 [conn10] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 17 does not match received 8 ( ns : foo.bar, received : 8|1||4fd976a08c7a5fd108c1eeb1, wanted : 17|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 8|1||4fd976a08c7a5fd108c1eeb1, wanted : 17|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30005| Thu Jun 14 01:29:28 [conn10] ntoskip:0 ntoreturn:1
m30005| Thu Jun 14 01:29:28 [conn10] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 17 does not match received 8 ( ns : foo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 8000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 17000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30006| Thu Jun 14 01:29:28 [conn9] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 18 does not match received 9 ( ns : foo.bar, received : 9|1||4fd976a08c7a5fd108c1eeb1, wanted : 18|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 9|1||4fd976a08c7a5fd108c1eeb1, wanted : 18|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30006| Thu Jun 14 01:29:28 [conn9] ntoskip:0 ntoreturn:1
m30006| Thu Jun 14 01:29:28 [conn9] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 18 does not match received 9 ( ns : foo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 9000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 18000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30007| Thu Jun 14 01:29:28 [conn9] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 19 does not match received 10 ( ns : foo.bar, received : 10|1||4fd976a08c7a5fd108c1eeb1, wanted : 19|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 10|1||4fd976a08c7a5fd108c1eeb1, wanted : 19|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30007| Thu Jun 14 01:29:28 [conn9] ntoskip:0 ntoreturn:1
m30007| Thu Jun 14 01:29:28 [conn9] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 19 does not match received 10 ( ns : fo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 10000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 19000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30008| Thu Jun 14 01:29:28 [conn9] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 20 does not match received 11 ( ns : foo.bar, received : 11|1||4fd976a08c7a5fd108c1eeb1, wanted : 20|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 11|1||4fd976a08c7a5fd108c1eeb1, wanted : 20|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30008| Thu Jun 14 01:29:28 [conn9] ntoskip:0 ntoreturn:1
m30008| Thu Jun 14 01:29:28 [conn9] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 20 does not match received 11 ( ns : fo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 11000|1, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 20000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30009| Thu Jun 14 01:29:28 [conn8] assertion 13388 [foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 21 does not match received 11 ( ns : foo.bar, received : 11|0||4fd976a08c7a5fd108c1eeb1, wanted : 21|0||4fd976a08c7a5fd108c1eeb1, send ) ( ns : foo.bar, received : 11|0||4fd976a08c7a5fd108c1eeb1, wanted : 21|0||4fd976a08c7a5fd108c1eeb1, send ) ns:foo.$cmd query:{ count: "bar", query: {} }
m30009| Thu Jun 14 01:29:28 [conn8] ntoskip:0 ntoreturn:1
m30009| Thu Jun 14 01:29:28 [conn8] { $err: "[foo.bar] shard version not ok in Client::Context: version mismatch detected for foo.bar, stored major version 21 does not match received 11 ( ns : fo...", code: 13388, ns: "foo.bar", vReceived: Timestamp 11000|0, vReceivedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1'), vWanted: Timestamp 21000|0, vWantedEpoch: ObjectId('4fd976a08c7a5fd108c1eeb1') }
m30998| Thu Jun 14 01:29:28 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 3 version: 21|1||4fd976a08c7a5fd108c1eeb1 based on: 11|1||4fd976a08c7a5fd108c1eeb1
0
[ ]
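The burst of assertion 13388 (StaleConfig) messages above comes from routing the count through a mongos (logged as m30998) whose cached chunk versions for foo.bar predate the second round of migrations: every shard rejects the first attempt because the version it received is older than the one it now holds, mongos reloads its ChunkManager (the "sequenceNumber: 3 ... based on: 11|1" line), retries, and the command succeeds. The "0" and "[ ]" lines appear to be the count result and an empty find().toArray() printed by the test. A sketch of the same access path, assuming the m30998 prefix maps to port 30998 as the shutdown lines below indicate:

    // Sketch: query through the mongos whose routing table is stale.
    // The StaleConfig round-trip and retry happen inside mongos; the caller
    // only sees the final result.
    var staleMongos = new Mongo("localhost:30998");
    var foo = staleMongos.getDB("foo");

    print(foo.bar.count());              // 0  -- no documents were ever inserted
    printjson(foo.bar.find().toArray()); // [ ]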
m30999| Thu Jun 14 01:29:28 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:39072 (37 connections now open)
m30000| Thu Jun 14 01:29:28 [conn12] end connection 127.0.0.1:39096 (36 connections now open)
m30001| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:58987 (8 connections now open)
m30001| Thu Jun 14 01:29:28 [conn4] end connection 127.0.0.1:58997 (7 connections now open)
m30003| Thu Jun 14 01:29:28 [conn5] end connection 127.0.0.1:57685 (8 connections now open)
m30003| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:57665 (7 connections now open)
m30002| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:46618 (8 connections now open)
m30002| Thu Jun 14 01:29:28 [conn5] end connection 127.0.0.1:46634 (7 connections now open)
m30004| Thu Jun 14 01:29:28 [conn5] end connection 127.0.0.1:52322 (8 connections now open)
m30004| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:52298 (7 connections now open)
m30005| Thu Jun 14 01:29:28 [conn7] end connection 127.0.0.1:36122 (9 connections now open)
m30005| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:36089 (8 connections now open)
m30005| Thu Jun 14 01:29:28 [conn5] end connection 127.0.0.1:36117 (7 connections now open)
m30007| Thu Jun 14 01:29:28 [conn4] end connection 127.0.0.1:56513 (8 connections now open)
m30007| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:56480 (7 connections now open)
m30006| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:57793 (8 connections now open)
m30006| Thu Jun 14 01:29:28 [conn5] end connection 127.0.0.1:57826 (7 connections now open)
m30008| Thu Jun 14 01:29:28 [conn4] end connection 127.0.0.1:51954 (8 connections now open)
m30008| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:51921 (7 connections now open)
m30009| Thu Jun 14 01:29:28 [conn4] end connection 127.0.0.1:46337 (8 connections now open)
m30009| Thu Jun 14 01:29:28 [conn3] end connection 127.0.0.1:46304 (7 connections now open)
m30000| Thu Jun 14 01:29:28 [conn25] end connection 127.0.0.1:39134 (35 connections now open)
Thu Jun 14 01:29:29 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:29:29 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:29:29 [conn6] end connection 127.0.0.1:39078 (34 connections now open)
m30000| Thu Jun 14 01:29:29 [conn26] end connection 127.0.0.1:39149 (33 connections now open)
m30000| Thu Jun 14 01:29:29 [conn8] end connection 127.0.0.1:39080 (32 connections now open)
m30000| Thu Jun 14 01:29:29 [conn38] end connection 127.0.0.1:39176 (32 connections now open)
m30002| Thu Jun 14 01:29:29 [conn9] end connection 127.0.0.1:46697 (6 connections now open)
m30001| Thu Jun 14 01:29:29 [conn9] end connection 127.0.0.1:59067 (6 connections now open)
m30003| Thu Jun 14 01:29:29 [conn9] end connection 127.0.0.1:57744 (6 connections now open)
m30004| Thu Jun 14 01:29:29 [conn9] end connection 127.0.0.1:52377 (6 connections now open)
m30005| Thu Jun 14 01:29:29 [conn10] end connection 127.0.0.1:36168 (6 connections now open)
m30006| Thu Jun 14 01:29:29 [conn9] end connection 127.0.0.1:57872 (6 connections now open)
m30007| Thu Jun 14 01:29:29 [conn9] end connection 127.0.0.1:56559 (6 connections now open)
m30008| Thu Jun 14 01:29:29 [conn9] end connection 127.0.0.1:52000 (6 connections now open)
m30009| Thu Jun 14 01:29:29 [conn8] end connection 127.0.0.1:46383 (6 connections now open)
Thu Jun 14 01:29:30 shell: stopped mongo program on port 30998
m30997| Thu Jun 14 01:29:30 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:29:30 [conn10] end connection 127.0.0.1:39084 (30 connections now open)
m30000| Thu Jun 14 01:29:30 [conn11] end connection 127.0.0.1:39085 (29 connections now open)
m30000| Thu Jun 14 01:29:30 [conn27] end connection 127.0.0.1:39159 (28 connections now open)
m30002| Thu Jun 14 01:29:30 [conn8] end connection 127.0.0.1:46670 (5 connections now open)
m30001| Thu Jun 14 01:29:30 [conn8] end connection 127.0.0.1:59040 (5 connections now open)
m30005| Thu Jun 14 01:29:30 [conn9] end connection 127.0.0.1:36141 (5 connections now open)
m30003| Thu Jun 14 01:29:30 [conn8] end connection 127.0.0.1:57717 (5 connections now open)
m30008| Thu Jun 14 01:29:30 [conn6] end connection 127.0.0.1:51973 (5 connections now open)
m30004| Thu Jun 14 01:29:30 [conn8] end connection 127.0.0.1:52350 (5 connections now open)
m30006| Thu Jun 14 01:29:30 [conn7] end connection 127.0.0.1:57845 (5 connections now open)
m30009| Thu Jun 14 01:29:30 [conn6] end connection 127.0.0.1:46356 (5 connections now open)
m30007| Thu Jun 14 01:29:30 [conn6] end connection 127.0.0.1:56532 (5 connections now open)
m30000| Thu Jun 14 01:29:30 [conn9] end connection 127.0.0.1:39082 (27 connections now open)
Thu Jun 14 01:29:31 shell: stopped mongo program on port 30997
m30000| Thu Jun 14 01:29:31 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:29:31 [interruptThread] now exiting
m30000| Thu Jun 14 01:29:31 dbexit:
m30000| Thu Jun 14 01:29:31 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:29:31 [interruptThread] closing listening socket: 20
m30000| Thu Jun 14 01:29:31 [interruptThread] closing listening socket: 21
m30000| Thu Jun 14 01:29:31 [interruptThread] closing listening socket: 22
m30000| Thu Jun 14 01:29:31 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:29:31 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:29:31 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:29:31 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:29:31 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:29:31 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:29:31 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:29:31 dbexit: really exiting now
m30001| Thu Jun 14 01:29:31 [conn5] end connection 127.0.0.1:58999 (4 connections now open)
m30009| Thu Jun 14 01:29:31 [conn9] end connection 127.0.0.1:46385 (4 connections now open)
Thu Jun 14 01:29:32 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:29:32 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:29:32 [interruptThread] now exiting
m30001| Thu Jun 14 01:29:32 dbexit:
m30001| Thu Jun 14 01:29:32 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:29:32 [interruptThread] closing listening socket: 23
m30001| Thu Jun 14 01:29:32 [interruptThread] closing listening socket: 24
m30001| Thu Jun 14 01:29:32 [interruptThread] closing listening socket: 25
m30001| Thu Jun 14 01:29:32 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:29:32 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:29:32 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:29:32 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:29:32 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:29:32 [conn4] end connection 127.0.0.1:46631 (4 connections now open)
m30001| Thu Jun 14 01:29:32 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:29:32 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:29:32 dbexit: really exiting now
Thu Jun 14 01:29:33 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:29:33 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:29:33 [interruptThread] now exiting
m30002| Thu Jun 14 01:29:33 dbexit:
m30002| Thu Jun 14 01:29:33 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:29:33 [interruptThread] closing listening socket: 26
m30002| Thu Jun 14 01:29:33 [interruptThread] closing listening socket: 27
m30002| Thu Jun 14 01:29:33 [interruptThread] closing listening socket: 28
m30002| Thu Jun 14 01:29:33 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:29:33 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:29:33 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:29:33 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:29:33 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:29:33 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:29:33 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:29:33 dbexit: really exiting now
m30003| Thu Jun 14 01:29:33 [conn4] end connection 127.0.0.1:57682 (4 connections now open)
Thu Jun 14 01:29:34 shell: stopped mongo program on port 30002
m30003| Thu Jun 14 01:29:34 got signal 15 (Terminated), will terminate after current cmd ends
m30003| Thu Jun 14 01:29:34 [interruptThread] now exiting
m30003| Thu Jun 14 01:29:34 dbexit:
m30003| Thu Jun 14 01:29:34 [interruptThread] shutdown: going to close listening sockets...
m30003| Thu Jun 14 01:29:34 [interruptThread] closing listening socket: 29
m30003| Thu Jun 14 01:29:34 [interruptThread] closing listening socket: 30
m30003| Thu Jun 14 01:29:34 [interruptThread] closing listening socket: 31
m30003| Thu Jun 14 01:29:34 [interruptThread] removing socket file: /tmp/mongodb-30003.sock
m30003| Thu Jun 14 01:29:34 [interruptThread] shutdown: going to flush diaglog...
m30003| Thu Jun 14 01:29:34 [interruptThread] shutdown: going to close sockets...
m30003| Thu Jun 14 01:29:34 [interruptThread] shutdown: waiting for fs preallocator...
m30003| Thu Jun 14 01:29:34 [interruptThread] shutdown: closing all files...
m30003| Thu Jun 14 01:29:34 [interruptThread] closeAllFiles() finished
m30003| Thu Jun 14 01:29:34 [interruptThread] shutdown: removing fs lock...
m30003| Thu Jun 14 01:29:34 dbexit: really exiting now
m30004| Thu Jun 14 01:29:34 [conn4] end connection 127.0.0.1:52319 (4 connections now open)
Thu Jun 14 01:29:35 shell: stopped mongo program on port 30003
m30004| Thu Jun 14 01:29:35 got signal 15 (Terminated), will terminate after current cmd ends
m30004| Thu Jun 14 01:29:35 [interruptThread] now exiting
m30004| Thu Jun 14 01:29:35 dbexit:
m30004| Thu Jun 14 01:29:35 [interruptThread] shutdown: going to close listening sockets...
m30004| Thu Jun 14 01:29:35 [interruptThread] closing listening socket: 32
m30004| Thu Jun 14 01:29:35 [interruptThread] closing listening socket: 33
m30004| Thu Jun 14 01:29:35 [interruptThread] closing listening socket: 34
m30004| Thu Jun 14 01:29:35 [interruptThread] removing socket file: /tmp/mongodb-30004.sock
m30004| Thu Jun 14 01:29:35 [interruptThread] shutdown: going to flush diaglog...
m30004| Thu Jun 14 01:29:35 [interruptThread] shutdown: going to close sockets...
m30004| Thu Jun 14 01:29:35 [interruptThread] shutdown: waiting for fs preallocator...
m30004| Thu Jun 14 01:29:35 [interruptThread] shutdown: closing all files...
m30004| Thu Jun 14 01:29:35 [interruptThread] closeAllFiles() finished
m30004| Thu Jun 14 01:29:35 [interruptThread] shutdown: removing fs lock...
m30004| Thu Jun 14 01:29:35 dbexit: really exiting now
m30005| Thu Jun 14 01:29:35 [conn4] end connection 127.0.0.1:36114 (4 connections now open)
Thu Jun 14 01:29:36 shell: stopped mongo program on port 30004
m30005| Thu Jun 14 01:29:36 got signal 15 (Terminated), will terminate after current cmd ends
m30005| Thu Jun 14 01:29:36 [interruptThread] now exiting
m30005| Thu Jun 14 01:29:36 dbexit:
m30005| Thu Jun 14 01:29:36 [interruptThread] shutdown: going to close listening sockets...
m30005| Thu Jun 14 01:29:36 [interruptThread] closing listening socket: 35
m30005| Thu Jun 14 01:29:36 [interruptThread] closing listening socket: 36
m30005| Thu Jun 14 01:29:36 [interruptThread] closing listening socket: 37
m30005| Thu Jun 14 01:29:36 [interruptThread] removing socket file: /tmp/mongodb-30005.sock
m30005| Thu Jun 14 01:29:36 [interruptThread] shutdown: going to flush diaglog...
m30005| Thu Jun 14 01:29:36 [interruptThread] shutdown: going to close sockets...
m30005| Thu Jun 14 01:29:36 [interruptThread] shutdown: waiting for fs preallocator...
m30005| Thu Jun 14 01:29:36 [interruptThread] shutdown: closing all files...
m30005| Thu Jun 14 01:29:36 [interruptThread] closeAllFiles() finished
m30005| Thu Jun 14 01:29:36 [interruptThread] shutdown: removing fs lock...
m30005| Thu Jun 14 01:29:36 dbexit: really exiting now
m30006| Thu Jun 14 01:29:36 [conn4] end connection 127.0.0.1:57822 (4 connections now open)
Thu Jun 14 01:29:37 shell: stopped mongo program on port 30005
m30006| Thu Jun 14 01:29:37 got signal 15 (Terminated), will terminate after current cmd ends
m30006| Thu Jun 14 01:29:37 [interruptThread] now exiting
m30006| Thu Jun 14 01:29:37 dbexit:
m30006| Thu Jun 14 01:29:37 [interruptThread] shutdown: going to close listening sockets...
m30006| Thu Jun 14 01:29:37 [interruptThread] closing listening socket: 38
m30006| Thu Jun 14 01:29:37 [interruptThread] closing listening socket: 39
m30006| Thu Jun 14 01:29:37 [interruptThread] closing listening socket: 40
m30006| Thu Jun 14 01:29:37 [interruptThread] removing socket file: /tmp/mongodb-30006.sock
m30006| Thu Jun 14 01:29:37 [interruptThread] shutdown: going to flush diaglog...
m30006| Thu Jun 14 01:29:37 [interruptThread] shutdown: going to close sockets...
m30006| Thu Jun 14 01:29:37 [interruptThread] shutdown: waiting for fs preallocator...
m30006| Thu Jun 14 01:29:37 [interruptThread] shutdown: closing all files...
m30006| Thu Jun 14 01:29:37 [interruptThread] closeAllFiles() finished
m30006| Thu Jun 14 01:29:37 [interruptThread] shutdown: removing fs lock...
m30006| Thu Jun 14 01:29:37 dbexit: really exiting now
m30007| Thu Jun 14 01:29:37 [conn7] end connection 127.0.0.1:56540 (4 connections now open)
Thu Jun 14 01:29:38 shell: stopped mongo program on port 30006
m30007| Thu Jun 14 01:29:38 got signal 15 (Terminated), will terminate after current cmd ends
m30007| Thu Jun 14 01:29:38 [interruptThread] now exiting
m30007| Thu Jun 14 01:29:38 dbexit:
m30007| Thu Jun 14 01:29:38 [interruptThread] shutdown: going to close listening sockets...
m30007| Thu Jun 14 01:29:38 [interruptThread] closing listening socket: 41
m30007| Thu Jun 14 01:29:38 [interruptThread] closing listening socket: 42
m30007| Thu Jun 14 01:29:38 [interruptThread] closing listening socket: 43
m30007| Thu Jun 14 01:29:38 [interruptThread] removing socket file: /tmp/mongodb-30007.sock
m30007| Thu Jun 14 01:29:38 [interruptThread] shutdown: going to flush diaglog...
m30007| Thu Jun 14 01:29:38 [interruptThread] shutdown: going to close sockets...
m30007| Thu Jun 14 01:29:38 [interruptThread] shutdown: waiting for fs preallocator...
m30007| Thu Jun 14 01:29:38 [interruptThread] shutdown: closing all files...
m30007| Thu Jun 14 01:29:38 [interruptThread] closeAllFiles() finished
m30007| Thu Jun 14 01:29:38 [interruptThread] shutdown: removing fs lock...
m30007| Thu Jun 14 01:29:38 dbexit: really exiting now
m30008| Thu Jun 14 01:29:38 [conn7] end connection 127.0.0.1:51985 (4 connections now open)
Thu Jun 14 01:29:39 shell: stopped mongo program on port 30007
m30008| Thu Jun 14 01:29:39 got signal 15 (Terminated), will terminate after current cmd ends
m30008| Thu Jun 14 01:29:39 [interruptThread] now exiting
m30008| Thu Jun 14 01:29:39 dbexit:
m30008| Thu Jun 14 01:29:39 [interruptThread] shutdown: going to close listening sockets...
m30008| Thu Jun 14 01:29:39 [interruptThread] closing listening socket: 44
m30008| Thu Jun 14 01:29:39 [interruptThread] closing listening socket: 45
m30008| Thu Jun 14 01:29:39 [interruptThread] closing listening socket: 46
m30008| Thu Jun 14 01:29:39 [interruptThread] removing socket file: /tmp/mongodb-30008.sock
m30008| Thu Jun 14 01:29:39 [interruptThread] shutdown: going to flush diaglog...
m30008| Thu Jun 14 01:29:39 [interruptThread] shutdown: going to close sockets...
m30008| Thu Jun 14 01:29:39 [interruptThread] shutdown: waiting for fs preallocator...
m30008| Thu Jun 14 01:29:39 [interruptThread] shutdown: closing all files...
m30008| Thu Jun 14 01:29:39 [interruptThread] closeAllFiles() finished
m30008| Thu Jun 14 01:29:39 [interruptThread] shutdown: removing fs lock...
m30008| Thu Jun 14 01:29:39 dbexit: really exiting now
m30009| Thu Jun 14 01:29:39 [conn7] end connection 127.0.0.1:46371 (3 connections now open)
Thu Jun 14 01:29:40 shell: stopped mongo program on port 30008
m30009| Thu Jun 14 01:29:40 got signal 15 (Terminated), will terminate after current cmd ends
m30009| Thu Jun 14 01:29:40 [interruptThread] now exiting
m30009| Thu Jun 14 01:29:40 dbexit:
m30009| Thu Jun 14 01:29:40 [interruptThread] shutdown: going to close listening sockets...
m30009| Thu Jun 14 01:29:40 [interruptThread] closing listening socket: 47
m30009| Thu Jun 14 01:29:40 [interruptThread] closing listening socket: 48
m30009| Thu Jun 14 01:29:40 [interruptThread] closing listening socket: 49
m30009| Thu Jun 14 01:29:40 [interruptThread] removing socket file: /tmp/mongodb-30009.sock
m30009| Thu Jun 14 01:29:40 [interruptThread] shutdown: going to flush diaglog...
m30009| Thu Jun 14 01:29:40 [interruptThread] shutdown: going to close sockets...
m30009| Thu Jun 14 01:29:40 [interruptThread] shutdown: waiting for fs preallocator...
m30009| Thu Jun 14 01:29:40 [interruptThread] shutdown: closing all files...
m30009| Thu Jun 14 01:29:40 [interruptThread] closeAllFiles() finished
m30009| Thu Jun 14 01:29:40 [interruptThread] shutdown: removing fs lock...
m30009| Thu Jun 14 01:29:40 dbexit: really exiting now
Thu Jun 14 01:29:41 shell: stopped mongo program on port 30009
*** ShardingTest test completed successfully in 39.799 seconds ***
39860.694170ms
Thu Jun 14 01:29:41 [initandlisten] connection accepted from 127.0.0.1:59189 #13 (12 connections now open)
*******************************************
Test : coll_epoch_test0.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/coll_epoch_test0.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/coll_epoch_test0.js";TestData.testFile = "coll_epoch_test0.js";TestData.testName = "coll_epoch_test0";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:29:41 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:29:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:29:41
m30000| Thu Jun 14 01:29:41 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:29:41
m30000| Thu Jun 14 01:29:41 [initandlisten] MongoDB starting : pid=23010 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:29:41 [initandlisten]
m30000| Thu Jun 14 01:29:41 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:29:41 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:29:41 [initandlisten]
m30000| Thu Jun 14 01:29:41 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:29:41 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:29:41 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:29:41 [initandlisten]
m30000| Thu Jun 14 01:29:41 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:29:41 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:29:41 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:29:41 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:29:41 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:29:41 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:29:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:29:41 [initandlisten] connection accepted from 127.0.0.1:39190 #1 (1 connection now open)
m30001| Thu Jun 14 01:29:41
m30001| Thu Jun 14 01:29:41 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:29:41
m30001| Thu Jun 14 01:29:41 [initandlisten] MongoDB starting : pid=23023 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:29:41 [initandlisten]
m30001| Thu Jun 14 01:29:41 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:29:41 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:29:41 [initandlisten]
m30001| Thu Jun 14 01:29:41 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:29:41 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:29:41 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:29:41 [initandlisten]
m30001| Thu Jun 14 01:29:41 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:29:41 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:29:41 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:29:41 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:29:41 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:29:41 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:29:41 [initandlisten] connection accepted from 127.0.0.1:59082 #1 (1 connection now open)
m30000| Thu Jun 14 01:29:41 [initandlisten] connection accepted from 127.0.0.1:39193 #2 (2 connections now open)
ShardingTest test :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001
    ]
}
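
The block above is the summary the shell prints once the cluster for this test is up: the config database lives on localhost:30000 (which also serves as shard0000) and localhost:30001 is the second shard, with a single mongos on 30999. A minimal sketch of how a jstest of this shape is usually constructed, assuming the standard ShardingTest helper from the shell test harness (illustrative only, not the actual contents of coll_epoch_test0.js):

    // Illustrative: two shards, one mongos, as reflected in the summary above.
    var st = new ShardingTest({ shards : 2, mongos : 1 });
    var admin = st.s.getDB( "admin" );     // st.s is the mongos (port 30999 in this log)
    var config = st.s.getDB( "config" );   // cluster metadata, stored on localhost:30000
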
m30000| Thu Jun 14 01:29:41 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:29:41 [FileAllocator] creating directory /data/db/test0/_tmp
Thu Jun 14 01:29:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30999| Thu Jun 14 01:29:41 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:29:41 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23038 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:29:41 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:29:41 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:29:41 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:29:41 [initandlisten] connection accepted from 127.0.0.1:39195 #3 (3 connections now open)
m30000| Thu Jun 14 01:29:41 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0.246 secs
m30000| Thu Jun 14 01:29:41 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:29:42 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 16MB, took 0.317 secs
m30000| Thu Jun 14 01:29:42 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:29:42 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn2] insert config.settings keyUpdates:0 locks(micros) w:585161 585ms
m30000| Thu Jun 14 01:29:42 [initandlisten] connection accepted from 127.0.0.1:39198 #4 (4 connections now open)
m30000| Thu Jun 14 01:29:42 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [initandlisten] connection accepted from 127.0.0.1:39200 #5 (5 connections now open)
m30000| Thu Jun 14 01:29:42 [conn5] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn5] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:29:42 [conn5] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn5] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn5] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:29:42 [conn5] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:42 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:29:42 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:29:42 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:29:42 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:29:42
m30999| Thu Jun 14 01:29:42 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:29:42 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [initandlisten] connection accepted from 127.0.0.1:39201 #6 (6 connections now open)
m30999| Thu Jun 14 01:29:42 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651782:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:29:42 [conn5] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn5] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 1 total records. 0 secs
m30000| Thu Jun 14 01:29:42 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:42 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651782:1804289383' acquired, ts : 4fd976c6137b6769b293c978
m30999| Thu Jun 14 01:29:42 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651782:1804289383' unlocked.
m30999| Thu Jun 14 01:29:42 [websvr] admin web console waiting for connections on port 31999
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:29:42 [mongosMain] connection accepted from 127.0.0.1:51289 #1 (1 connection now open)
m30999| Thu Jun 14 01:29:42 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:29:42 [conn5] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:42 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:29:42 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 32MB, took 0.576 secs
m30000| Thu Jun 14 01:29:42 [conn4] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:5 r:323 w:1337 reslen:177 343ms
m30999| Thu Jun 14 01:29:42 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:29:42 [initandlisten] connection accepted from 127.0.0.1:59093 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:42 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30000| Thu Jun 14 01:29:42 [initandlisten] connection accepted from 127.0.0.1:39204 #7 (7 connections now open)
m30999| Thu Jun 14 01:29:42 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976c6137b6769b293c977
m30001| Thu Jun 14 01:29:42 [initandlisten] connection accepted from 127.0.0.1:59095 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:42 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976c6137b6769b293c977
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Thu Jun 14 01:29:42 [conn] couldn't find database [foo] in config db
m30001| Thu Jun 14 01:29:42 [initandlisten] connection accepted from 127.0.0.1:59096 #4 (4 connections now open)
m30999| Thu Jun 14 01:29:42 [conn] put [foo] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:29:42 [conn] enabling sharding on: foo
m30999| Thu Jun 14 01:29:42 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:29:42 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:29:42 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd976c6137b6769b293c979
m30001| Thu Jun 14 01:29:42 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:29:42 [FileAllocator] creating directory /data/db/test1/_tmp
m30999| Thu Jun 14 01:29:42 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd976c6137b6769b293c979 based on: (empty)
m30000| Thu Jun 14 01:29:42 [conn5] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:29:42 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:42 [conn] resetting shard version of foo.bar on localhost:30000, version is zero
m30001| Thu Jun 14 01:29:43 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.28 secs
m30001| Thu Jun 14 01:29:43 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:29:43 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.291 secs
m30001| Thu Jun 14 01:29:43 [conn4] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:29:43 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:29:43 [conn4] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:29:43 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) W:74 r:245 w:589909 589ms
m30001| Thu Jun 14 01:29:43 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976c6137b6769b293c979'), serverID: ObjectId('4fd976c6137b6769b293c977'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:60 reslen:171 588ms
m30001| Thu Jun 14 01:29:43 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:29:43 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:29:43 [initandlisten] connection accepted from 127.0.0.1:39207 #8 (8 connections now open)
4fd976c6137b6769b293c979
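
The bare ObjectId printed above (4fd976c6137b6769b293c979) is the epoch mongos assigned to foo.bar when the collection was sharded. One way to read it back, assuming the epoch is stored as lastmodEpoch in config.collections as in this release line (a sketch, not the test's own code):

    // Illustrative: fetch the collection metadata document from the config server via mongos.
    var collDoc = st.s.getDB( "config" ).collections.findOne({ _id : "foo.bar" });
    print( collDoc.lastmodEpoch.str );   // hex string of the epoch ObjectId
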
m30999| Thu Jun 14 01:29:43 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:29:43 [initandlisten] connection accepted from 127.0.0.1:39208 #9 (9 connections now open)
m30001| Thu Jun 14 01:29:43 [conn4] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:43 [conn4] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:29:43 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651783:936631973' acquired, ts : 4fd976c723820589cad06f39
m30001| Thu Jun 14 01:29:43 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651783:936631973 (sleeping for 30000ms)
m30001| Thu Jun 14 01:29:43 [conn4] splitChunk accepted at version 1|0||4fd976c6137b6769b293c979
m30001| Thu Jun 14 01:29:43 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:43-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59096", time: new Date(1339651783481), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd976c6137b6769b293c979') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd976c6137b6769b293c979') } } }
m30001| Thu Jun 14 01:29:43 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651783:936631973' unlocked.
m30000| Thu Jun 14 01:29:43 [initandlisten] connection accepted from 127.0.0.1:39209 #10 (10 connections now open)
m30999| Thu Jun 14 01:29:43 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|2||4fd976c6137b6769b293c979 based on: 1|0||4fd976c6137b6769b293c979
{ "ok" : 1 }
4fd976c6137b6769b293c979
4fd976c6137b6769b293c979
{ "ok" : 0, "errmsg" : "that chunk is already on that shard" }
m30999| Thu Jun 14 01:29:43 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0001" }
4fd976c6137b6769b293c979
4fd976c6137b6769b293c979
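
The exchange above splits foo.bar at _id 0 ({ "ok" : 1 }), then asks mongos to move the chunk to shard0001, the shard that already owns it, which fails with "that chunk is already on that shard"; the epoch printed before and after each step is identical. A hedged reconstruction of that check:

    // Illustrative: neither a split nor a rejected moveChunk should change the collection epoch.
    var admin = st.s.getDB( "admin" );
    var epochBefore = st.s.getDB( "config" ).collections.findOne({ _id : "foo.bar" }).lastmodEpoch;
    printjson( admin.runCommand({ split : "foo.bar", middle : { _id : 0 } }) );
    printjson( admin.runCommand({ moveChunk : "foo.bar", find : { _id : 0 }, to : "shard0001" }) );
    var epochAfter = st.s.getDB( "config" ).collections.findOne({ _id : "foo.bar" }).lastmodEpoch;
    assert.eq( epochBefore.str, epochAfter.str );
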
m30999| Thu Jun 14 01:29:43 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:29:43 [conn3] end connection 127.0.0.1:39195 (9 connections now open)
m30000| Thu Jun 14 01:29:43 [conn5] end connection 127.0.0.1:39200 (8 connections now open)
m30000| Thu Jun 14 01:29:43 [conn6] end connection 127.0.0.1:39201 (7 connections now open)
m30000| Thu Jun 14 01:29:43 [conn7] end connection 127.0.0.1:39204 (6 connections now open)
m30001| Thu Jun 14 01:29:43 [conn3] end connection 127.0.0.1:59095 (3 connections now open)
m30001| Thu Jun 14 01:29:43 [conn4] end connection 127.0.0.1:59096 (2 connections now open)
m30001| Thu Jun 14 01:29:44 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.578 secs
Thu Jun 14 01:29:44 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:29:44 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:29:44 [interruptThread] now exiting
m30000| Thu Jun 14 01:29:44 dbexit:
m30000| Thu Jun 14 01:29:44 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:29:44 [interruptThread] closing listening socket: 21
m30000| Thu Jun 14 01:29:44 [interruptThread] closing listening socket: 22
m30000| Thu Jun 14 01:29:44 [interruptThread] closing listening socket: 23
m30000| Thu Jun 14 01:29:44 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:29:44 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:29:44 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:29:44 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:29:44 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:29:44 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:29:44 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:29:44 dbexit: really exiting now
Thu Jun 14 01:29:45 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:29:45 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:29:45 [interruptThread] now exiting
m30001| Thu Jun 14 01:29:45 dbexit:
m30001| Thu Jun 14 01:29:45 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:29:45 [interruptThread] closing listening socket: 24
m30001| Thu Jun 14 01:29:45 [interruptThread] closing listening socket: 25
m30001| Thu Jun 14 01:29:45 [interruptThread] closing listening socket: 26
m30001| Thu Jun 14 01:29:45 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:29:45 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:29:45 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:29:45 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:29:45 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:29:45 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:29:45 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:29:45 dbexit: really exiting now
Thu Jun 14 01:29:46 shell: stopped mongo program on port 30001
*** ShardingTest test completed successfully in 5.222 seconds ***
5278.566122ms
Thu Jun 14 01:29:46 [conn2] end connection 127.0.0.1:42080 (11 connections now open)
Thu Jun 14 01:29:46 [conn3] end connection 127.0.0.1:42161 (10 connections now open)
Thu Jun 14 01:29:46 [conn4] end connection 127.0.0.1:42173 (9 connections now open)
Thu Jun 14 01:29:46 [conn5] end connection 127.0.0.1:42257 (11 connections now open)
Thu Jun 14 01:29:46 [conn6] end connection 127.0.0.1:42297 (7 connections now open)
Thu Jun 14 01:29:46 [conn8] end connection 127.0.0.1:58919 (7 connections now open)
Thu Jun 14 01:29:46 [conn7] end connection 127.0.0.1:42323 (5 connections now open)
Thu Jun 14 01:29:46 [conn9] end connection 127.0.0.1:58950 (4 connections now open)
Thu Jun 14 01:29:46 [conn10] end connection 127.0.0.1:58974 (3 connections now open)
Thu Jun 14 01:29:46 [conn11] end connection 127.0.0.1:59033 (4 connections now open)
Thu Jun 14 01:29:46 [conn12] end connection 127.0.0.1:59050 (1 connection now open)
Thu Jun 14 01:29:46 [conn13] end connection 127.0.0.1:59189 (0 connections now open)
Thu Jun 14 01:29:46 [initandlisten] connection accepted from 127.0.0.1:59211 #14 (1 connection now open)
*******************************************
Test : coll_epoch_test1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/coll_epoch_test1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/coll_epoch_test1.js";TestData.testFile = "coll_epoch_test1.js";TestData.testName = "coll_epoch_test1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:29:46 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:29:46 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:29:46
m30000| Thu Jun 14 01:29:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:29:46
m30000| Thu Jun 14 01:29:46 [initandlisten] MongoDB starting : pid=23081 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:29:46 [initandlisten]
m30000| Thu Jun 14 01:29:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:29:46 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:29:46 [initandlisten]
m30000| Thu Jun 14 01:29:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:29:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:29:46 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:29:46 [initandlisten]
m30000| Thu Jun 14 01:29:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:29:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:29:46 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:29:46 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:29:46 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:29:46 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:29:46 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:29:46 [initandlisten] connection accepted from 127.0.0.1:39212 #1 (1 connection now open)
m30001| Thu Jun 14 01:29:46
m30001| Thu Jun 14 01:29:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:29:46
m30001| Thu Jun 14 01:29:46 [initandlisten] MongoDB starting : pid=23094 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:29:46 [initandlisten]
m30001| Thu Jun 14 01:29:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:29:46 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:29:46 [initandlisten]
m30001| Thu Jun 14 01:29:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:29:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:29:46 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:29:46 [initandlisten]
m30001| Thu Jun 14 01:29:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:29:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:29:46 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:29:46 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:29:46 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:29:46 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/test2'
Thu Jun 14 01:29:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/test2
m30001| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:59104 #1 (1 connection now open)
m30002| Thu Jun 14 01:29:47
m30002| Thu Jun 14 01:29:47 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:29:47
m30002| Thu Jun 14 01:29:47 [initandlisten] MongoDB starting : pid=23107 port=30002 dbpath=/data/db/test2 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:29:47 [initandlisten]
m30002| Thu Jun 14 01:29:47 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:29:47 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:29:47 [initandlisten]
m30002| Thu Jun 14 01:29:47 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:29:47 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:29:47 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:29:47 [initandlisten]
m30002| Thu Jun 14 01:29:47 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:29:47 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:29:47 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:29:47 [initandlisten] options: { dbpath: "/data/db/test2", port: 30002 }
m30002| Thu Jun 14 01:29:47 [websvr] admin web console waiting for connections on port 31002
m30002| Thu Jun 14 01:29:47 [initandlisten] waiting for connections on port 30002
"localhost:30000"
ShardingTest test :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001,
        connection to localhost:30002
    ]
}
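
Unlike the previous test, this summary shows three shards, and the shell goes on to start three mongos routers (30999, 30998, 30997, each with -v). A sketch of the corresponding setup call, again illustrative rather than the test's literal source:

    // Illustrative: three shards and three routers; extra mongos processes are what let a
    // test exercise routers that are holding stale collection metadata.
    var st = new ShardingTest({ shards : 3, mongos : 3 });
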
Thu Jun 14 01:29:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30002| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:46735 #1 (1 connection now open)
m30999| Thu Jun 14 01:29:47 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:29:47 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23121 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:29:47 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:29:47 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:29:47 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:29:47 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:29:47 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:47 [mongosMain] connected connection!
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39217 #2 (2 connections now open)
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39218 #3 (3 connections now open)
m30000| Thu Jun 14 01:29:47 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:29:47 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Thu Jun 14 01:29:47 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0.248 secs
m30000| Thu Jun 14 01:29:47 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:29:47 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 16MB, took 0.318 secs
m30000| Thu Jun 14 01:29:47 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:29:47 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:29:47 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn2] insert config.settings keyUpdates:0 locks(micros) w:579087 578ms
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39222 #4 (4 connections now open)
m30000| Thu Jun 14 01:29:47 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:29:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39223 #5 (5 connections now open)
m30000| Thu Jun 14 01:29:47 [conn5] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:29:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn5] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:29:47 [conn5] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:29:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:29:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:29:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn5] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:29:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn5] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:29:47 [conn5] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:29:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:29:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39224 #6 (6 connections now open)
m30000| Thu Jun 14 01:29:47 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:29:47 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:29:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:47 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:29:47 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:29:47 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:47 [mongosMain] connected connection!
m30999| Thu Jun 14 01:29:47 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:47 [mongosMain] connected connection!
m30999| Thu Jun 14 01:29:47 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:29:47 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:29:47 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:29:47 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:29:47 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:29:47 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:29:47 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:29:47 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:29:47
m30999| Thu Jun 14 01:29:47 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:47 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:47 [Balancer] connected connection!
m30999| Thu Jun 14 01:29:47 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:29:47 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:29:47 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651787:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:29:47 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd976cbb766b48292746858" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:29:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' acquired, ts : 4fd976cbb766b48292746858
m30999| Thu Jun 14 01:29:47 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:29:47 [Balancer] no collections to balance
m30999| Thu Jun 14 01:29:47 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:29:47 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:29:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' unlocked.
m30999| Thu Jun 14 01:29:47 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651787:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:47 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:29:47 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651787:1804289383', sleeping for 30000ms
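
The two JSON documents logged above are the distributed-lock handshake for the balancer: the first is the lock entry this mongos is about to write, the second is the current state it found in config.locks (state 0, i.e. unlocked). Those collections can be inspected directly on the config server; a sketch, assuming the shell's connect() helper:

    // Illustrative: config.locks and config.lockpings live on the config server (localhost:30000 here).
    var configDB = connect( "localhost:30000/config" );
    printjson( configDB.locks.findOne({ _id : "balancer" }) );   // state 0 = unlocked, non-zero = held or being taken
    configDB.lockpings.find().forEach( printjson );              // liveness pings from each lock process
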
Thu Jun 14 01:29:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:30000 -v
m30998| Thu Jun 14 01:29:47 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:29:47 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23143 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:29:47 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:29:47 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:29:47 [mongosMain] options: { configdb: "localhost:30000", port: 30998, verbose: true }
m30998| Thu Jun 14 01:29:47 [mongosMain] config string : localhost:30000
m30998| Thu Jun 14 01:29:47 [mongosMain] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:29:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:47 [mongosMain] connection accepted from 127.0.0.1:51312 #1 (1 connection now open)
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39226 #7 (7 connections now open)
m30998| Thu Jun 14 01:29:47 [mongosMain] connected connection!
m30998| Thu Jun 14 01:29:47 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:29:47 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:29:47 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:29:47 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:29:47 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:29:47 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:29:47 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:29:47 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:29:47 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:29:47 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:29:47 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:29:47 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39228 #8 (8 connections now open)
m30998| Thu Jun 14 01:29:47 [Balancer] connected connection!
m30998| Thu Jun 14 01:29:47 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:29:47 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:29:47
m30998| Thu Jun 14 01:29:47 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:29:47 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:29:47 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:47 [initandlisten] connection accepted from 127.0.0.1:39229 #9 (9 connections now open)
m30998| Thu Jun 14 01:29:47 [Balancer] connected connection!
m30998| Thu Jun 14 01:29:47 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:29:47 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651787:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339651787:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339651787:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:29:47 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd976cbcd27128ed08b4480" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd976cbb766b48292746858" } }
m30998| Thu Jun 14 01:29:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651787:1804289383' acquired, ts : 4fd976cbcd27128ed08b4480
m30998| Thu Jun 14 01:29:47 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:29:47 [Balancer] no collections to balance
m30998| Thu Jun 14 01:29:47 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:29:47 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:29:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651787:1804289383' unlocked.
m30998| Thu Jun 14 01:29:47 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30998:1339651787:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:29:47 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:29:47 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30998:1339651787:1804289383', sleeping for 30000ms
Thu Jun 14 01:29:48 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30997 --configdb localhost:30000 -v
m30998| Thu Jun 14 01:29:48 [mongosMain] connection accepted from 127.0.0.1:35649 #1 (1 connection now open)
m30997| Thu Jun 14 01:29:48 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30997| Thu Jun 14 01:29:48 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23160 port=30997 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30997| Thu Jun 14 01:29:48 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30997| Thu Jun 14 01:29:48 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30997| Thu Jun 14 01:29:48 [mongosMain] options: { configdb: "localhost:30000", port: 30997, verbose: true }
m30997| Thu Jun 14 01:29:48 [mongosMain] config string : localhost:30000
m30997| Thu Jun 14 01:29:48 [mongosMain] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:39232 #10 (10 connections now open)
m30997| Thu Jun 14 01:29:48 [mongosMain] connected connection!
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: CheckConfigServers
m30997| Thu Jun 14 01:29:48 [CheckConfigServers] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:29:48 [mongosMain] MaxChunkSize: 50
m30997| Thu Jun 14 01:29:48 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30997| Thu Jun 14 01:29:48 [websvr] admin web console waiting for connections on port 31997
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:39233 #11 (11 connections now open)
m30997| Thu Jun 14 01:29:48 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30997| Thu Jun 14 01:29:48 [mongosMain] waiting for connections on port 30997
m30997| Thu Jun 14 01:29:48 [CheckConfigServers] connected connection!
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: Balancer
m30997| Thu Jun 14 01:29:48 [Balancer] about to contact config servers and shards
m30997| Thu Jun 14 01:29:48 [Balancer] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: cursorTimeout
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: PeriodicTask::Runner
m30000| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:39234 #12 (12 connections now open)
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:48 [Balancer] connected connection!
m30997| Thu Jun 14 01:29:48 [Balancer] config servers and shards contacted successfully
m30997| Thu Jun 14 01:29:48 [Balancer] balancer id: domU-12-31-39-01-70-B4:30997 started at Jun 14 01:29:48
m30997| Thu Jun 14 01:29:48 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30997| Thu Jun 14 01:29:48 [Balancer] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:39235 #13 (13 connections now open)
m30997| Thu Jun 14 01:29:48 [Balancer] connected connection!
m30997| Thu Jun 14 01:29:48 [Balancer] Refreshing MaxChunkSize: 50
m30997| Thu Jun 14 01:29:48 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651788:1804289383:
m30997| { "state" : 1,
m30997| "who" : "domU-12-31-39-01-70-B4:30997:1339651788:1804289383:Balancer:846930886",
m30997| "process" : "domU-12-31-39-01-70-B4:30997:1339651788:1804289383",
m30997| "when" : { "$date" : "Thu Jun 14 01:29:48 2012" },
m30997| "why" : "doing balance round",
m30997| "ts" : { "$oid" : "4fd976cc58d88aba7b3e681b" } }
m30997| { "_id" : "balancer",
m30997| "state" : 0,
m30997| "ts" : { "$oid" : "4fd976cbcd27128ed08b4480" } }
m30997| Thu Jun 14 01:29:48 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651788:1804289383' acquired, ts : 4fd976cc58d88aba7b3e681b
m30997| Thu Jun 14 01:29:48 [Balancer] *** start balancing round
m30997| Thu Jun 14 01:29:48 [Balancer] no collections to balance
m30997| Thu Jun 14 01:29:48 [Balancer] no need to move any chunk
m30997| Thu Jun 14 01:29:48 [Balancer] *** end of balancing round
m30997| Thu Jun 14 01:29:48 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651788:1804289383' unlocked.
m30997| Thu Jun 14 01:29:48 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30997:1339651788:1804289383 (sleeping for 30000ms)
m30997| Thu Jun 14 01:29:48 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:29:48 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30997:1339651788:1804289383', sleeping for 30000ms
m30997| Thu Jun 14 01:29:48 [mongosMain] connection accepted from 127.0.0.1:52073 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:29:48 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:29:48 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:29:48 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:29:48 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:48 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 32MB, took 0.583 secs
m30000| Thu Jun 14 01:29:48 [conn4] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:319 w:1413 reslen:177 129ms
m30999| Thu Jun 14 01:29:48 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
{ "shardAdded" : "shard0002", "ok" : 1 }
m30000| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:39239 #14 (14 connections now open)
m30002| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:46757 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:48 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30002| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:46760 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:48 [conn] connected connection!
m30999| Thu Jun 14 01:29:48 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30999| Thu Jun 14 01:29:48 [conn] creating new connection to:localhost:30002
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:48 [conn] connected connection!
m30999| Thu Jun 14 01:29:48 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
m30999| Thu Jun 14 01:29:48 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:48 [conn] connected connection!
m30999| Thu Jun 14 01:29:48 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976cbb766b48292746857
m30999| Thu Jun 14 01:29:48 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:29:48 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:48 [conn] connected connection!
m30999| Thu Jun 14 01:29:48 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976cbb766b48292746857
m30999| Thu Jun 14 01:29:48 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:29:48 [conn] creating new connection to:localhost:30002
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:48 [conn] connected connection!
m30999| Thu Jun 14 01:29:48 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd976cbb766b48292746857
m30999| Thu Jun 14 01:29:48 [conn] initializing shard connection to localhost:30002
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: WriteBackListener-localhost:30002
m30001| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:59127 #2 (2 connections now open)
m30001| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:59130 #3 (3 connections now open)
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
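"after balancer is off" refers to ShardingTest disabling the balancer before the test body runs; the exact mechanism the harness uses is not visible in this log, but the usual manual equivalent in this server generation is an upsert of the balancer document in config.settings. A sketch:

    var config = new Mongo("localhost:30000").getDB("config");
    // upsert { stopped: true } so the Balancer thread skips its rounds
    config.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true);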
m30997| Thu Jun 14 01:29:48 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30000| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:39242 #15 (15 connections now open)
----
Enabling sharding for the first time...
----
m30002| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:46763 #4 (4 connections now open)
m30001| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:59133 #4 (4 connections now open)
m30999| Thu Jun 14 01:29:48 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:29:48 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:48 [conn] connected connection!
m30001| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:59135 #5 (5 connections now open)
m30999| Thu Jun 14 01:29:48 [conn] creating new connection to:localhost:30002
m30999| Thu Jun 14 01:29:48 BackgroundJob starting: ConnectBG
m30002| Thu Jun 14 01:29:48 [initandlisten] connection accepted from 127.0.0.1:46765 #5 (5 connections now open)
m30999| Thu Jun 14 01:29:48 [conn] connected connection!
m30999| Thu Jun 14 01:29:48 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:29:48 [conn] put [foo] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:29:48 [conn] enabling sharding on: foo
m30999| Thu Jun 14 01:29:48 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:29:48 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:29:48 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd976ccb766b48292746859
m30999| Thu Jun 14 01:29:48 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd976ccb766b48292746859 based on: (empty)
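The "enabling sharding on: foo" and "CMD: shardcollection" lines above are the server-side echo of the enableSharding and shardCollection admin commands issued through m30999 (port inferred from the prefix). A minimal shell sketch with the database, collection, and key taken from the log:

    var admin = new Mongo("localhost:30999").getDB("admin");
    admin.runCommand({ enableSharding: "foo" });                        // "enabling sharding on: foo"
    admin.runCommand({ shardCollection: "foo.bar", key: { _id: 1 } });  // creates the single MinKey..MaxKey chunk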
m30001| Thu Jun 14 01:29:48 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30000| Thu Jun 14 01:29:48 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:29:48 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:29:48 [FileAllocator] creating directory /data/db/test1/_tmp
m30999| Thu Jun 14 01:29:48 [conn] resetting shard version of foo.bar on localhost:30000, version is zero
m30999| Thu Jun 14 01:29:48 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbb766b48292746857'), shard: "shard0000", shardHost: "localhost:30000" } 0x8c17520
m30999| Thu Jun 14 01:29:48 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:29:48 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976ccb766b48292746859'), serverID: ObjectId('4fd976cbb766b48292746857'), shard: "shard0001", shardHost: "localhost:30001" } 0x8c18060
m30001| Thu Jun 14 01:29:48 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.311 secs
m30001| Thu Jun 14 01:29:48 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:29:49 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.296 secs
m30001| Thu Jun 14 01:29:49 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:29:49 [conn5] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:29:49 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:29:49 [conn5] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:29:49 [conn5] insert foo.system.indexes keyUpdates:0 locks(micros) W:97 r:251 w:622992 622ms
m30001| Thu Jun 14 01:29:49 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976ccb766b48292746859'), serverID: ObjectId('4fd976cbb766b48292746857'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:61 reslen:171 621ms
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976ccb766b48292746859'), serverID: ObjectId('4fd976cbb766b48292746857'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8c18060
m30001| Thu Jun 14 01:29:49 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:29:49 [conn] resetting shard version of foo.bar on localhost:30002, version is zero
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion shard0002 localhost:30002 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbb766b48292746857'), shard: "shard0002", shardHost: "localhost:30002" } 0x8c19010
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30997| Thu Jun 14 01:29:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd976ccb766b48292746859 based on: (empty)
m30997| Thu Jun 14 01:29:49 [conn] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [conn] connected connection!
m30997| Thu Jun 14 01:29:49 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976cc58d88aba7b3e681a
m30997| Thu Jun 14 01:29:49 [conn] initializing shard connection to localhost:30000
m30997| Thu Jun 14 01:29:49 [conn] resetting shard version of foo.bar on localhost:30000, version is zero
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0000", shardHost: "localhost:30000" } 0x93053d0
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [conn] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: WriteBackListener-localhost:30000
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [conn] connected connection!
m30997| Thu Jun 14 01:29:49 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976cc58d88aba7b3e681a
m30997| Thu Jun 14 01:29:49 [conn] initializing shard connection to localhost:30001
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976ccb766b48292746859'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0001", shardHost: "localhost:30001" } 0x9305be8
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [conn] creating new connection to:localhost:30002
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: WriteBackListener-localhost:30001
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [conn] connected connection!
m30997| Thu Jun 14 01:29:49 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd976cc58d88aba7b3e681a
m30997| Thu Jun 14 01:29:49 [conn] initializing shard connection to localhost:30002
m30997| Thu Jun 14 01:29:49 [conn] resetting shard version of foo.bar on localhost:30002, version is zero
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion shard0002 localhost:30002 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0002", shardHost: "localhost:30002" } 0x9307740
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: WriteBackListener-localhost:30002
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30002] creating new connection to:localhost:30002
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 3971207 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [conn] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [conn] connected connection!
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 210 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 210 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 210 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 210 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 210 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connected connection!
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30002] connected connection!
m30997| Thu Jun 14 01:29:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|2||4fd976ccb766b48292746859 based on: 1|0||4fd976ccb766b48292746859
m30997| Thu Jun 14 01:29:49 [conn] autosplitted foo.bar shard: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd976ccb766b48292746859'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0001", shardHost: "localhost:30001" } 0x9305be8
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976ccb766b48292746859'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 4807965 splitThreshold: 471859
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:59139 #6 (6 connections now open)
m30001| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:59141 #7 (7 connections now open)
m30001| Thu Jun 14 01:29:49 [conn7] request split points lookup for chunk foo.bar { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:29:49 [conn7] max number of requested split points reached (2) before the end of chunk foo.bar { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:59142 #8 (8 connections now open)
m30001| Thu Jun 14 01:29:49 [conn7] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:29:49 [conn7] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:29:49 [conn7] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651789:1497076156' acquired, ts : 4fd976cd9aa5e9fe0af4a3e7
m30001| Thu Jun 14 01:29:49 [conn7] splitChunk accepted at version 1|0||4fd976ccb766b48292746859
m30001| Thu Jun 14 01:29:49 [conn7] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:49-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59141", time: new Date(1339651789065), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd976ccb766b48292746859') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd976ccb766b48292746859') } } }
m30001| Thu Jun 14 01:29:49 [conn7] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651789:1497076156' unlocked.
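The splitChunk request handled above was initiated by mongos autosplit once enough data had been written (the "about to initiate autosplit" lines), not by a user command. The same MinKey..MaxKey chunk could also be split by hand with the split admin command, using the split point the autosplitter chose here:

    var admin = new Mongo("localhost:30999").getDB("admin");
    // explicit equivalent of the autosplit at { _id: 0 } shown in this log
    printjson(admin.runCommand({ split: "foo.bar", middle: { _id: 0 } }));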
m30001| Thu Jun 14 01:29:49 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651789:1497076156 (sleeping for 30000ms)
m30001| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:59146 #9 (9 connections now open)
m30001| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:59148 #10 (10 connections now open)
m30998| Thu Jun 14 01:29:49 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30998| Thu Jun 14 01:29:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|2||4fd976ccb766b48292746859 based on: (empty)
m30998| Thu Jun 14 01:29:49 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:29:49 [conn] connected connection!
m30998| Thu Jun 14 01:29:49 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976cbcd27128ed08b447f
m30998| Thu Jun 14 01:29:49 [conn] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:29:49 [conn] resetting shard version of foo.bar on localhost:30000, version is zero
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbcd27128ed08b447f'), shard: "shard0000", shardHost: "localhost:30000" } 0x927e538
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:29:49 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: WriteBackListener-localhost:30000
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:29:49 [conn] connected connection!
m30998| Thu Jun 14 01:29:49 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976cbcd27128ed08b447f
m30998| Thu Jun 14 01:29:49 [conn] initializing shard connection to localhost:30001
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd976ccb766b48292746859'), serverID: ObjectId('4fd976cbcd27128ed08b447f'), shard: "shard0001", shardHost: "localhost:30001" } 0x927ed68
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:29:49 [conn] creating new connection to:localhost:30002
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: WriteBackListener-localhost:30001
m30998| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:29:49 [conn] connected connection!
m30998| Thu Jun 14 01:29:49 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd976cbcd27128ed08b447f
m30998| Thu Jun 14 01:29:49 [conn] initializing shard connection to localhost:30002
m30998| Thu Jun 14 01:29:49 [conn] resetting shard version of foo.bar on localhost:30002, version is zero
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion shard0002 localhost:30002 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbcd27128ed08b447f'), shard: "shard0002", shardHost: "localhost:30002" } 0x92807d0
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: WriteBackListener-localhost:30002
m30998| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30002] creating new connection to:localhost:30002
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connected connection!
m30998| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30002] connected connection!
m30002| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:46769 #6 (6 connections now open)
m30002| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:46772 #7 (7 connections now open)
m30002| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:46776 #8 (8 connections now open)
m30002| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:46778 #9 (9 connections now open)
m30000| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:39247 #16 (16 connections now open)
m30000| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:39248 #17 (17 connections now open)
m30000| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:39254 #18 (18 connections now open)
m30000| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:39255 #19 (19 connections now open)
m30999| Thu Jun 14 01:29:49 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:29:49 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:49-0", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651789150), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:29:49 [conn] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:49 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:29:49 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd976cdb766b4829274685a" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976cd9aa5e9fe0af4a3e7" } }
m30999| Thu Jun 14 01:29:49 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' acquired, ts : 4fd976cdb766b4829274685a
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar all locked
m30001| Thu Jun 14 01:29:49 [conn5] CMD: drop foo.bar
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbb766b48292746857'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8c1c408
m30001| Thu Jun 14 01:29:49 [conn5] wiping data for: foo.bar
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:29:49 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:49-1", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651789159), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:29:49 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' unlocked.
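The DROP / dropCollection metadata events above are what dropping a sharded collection through a mongos produces: the router takes the foo.bar distributed lock, drops the data on the owning shard (the "CMD: drop foo.bar" and "wiping data for: foo.bar" lines on m30001), and removes the chunk metadata. A shell sketch of the step behind these lines:

    var foo = new Mongo("localhost:30999").getDB("foo");
    foo.bar.drop();   // logged on the mongos as "DROP: foo.bar"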
----
Re-enabling sharding with a different key...
----
m30999| Thu Jun 14 01:29:49 [conn] sharded index write for foo.system.indexes
m30001| Thu Jun 14 01:29:49 [conn3] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:29:49 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:29:49 [conn3] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:29:49 [conn3] build index foo.bar { notId: 1.0 }
m30001| Thu Jun 14 01:29:49 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:49 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { notId: 1.0 } }
m30999| Thu Jun 14 01:29:49 [conn] enable sharding on: foo.bar with shard key: { notId: 1.0 }
m30999| Thu Jun 14 01:29:49 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd976cdb766b4829274685b
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|0||4fd976cdb766b4829274685b based on: (empty)
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), serverID: ObjectId('4fd976cbb766b48292746857'), shard: "shard0001", shardHost: "localhost:30001" } 0x8c18060
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976ccb766b48292746859'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), serverID: ObjectId('4fd976cbb766b48292746857'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8c18060
m30001| Thu Jun 14 01:29:49 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
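To shard foo.bar again on a different key, the test first writes an index on the new key (the "sharded index write for foo.system.indexes" and "build index foo.bar { notId: 1.0 }" lines) and then issues shardCollection once more, which mints the new epoch 4fd976cdb766b4829274685b. A sketch of that sequence through m30999:

    var mongos = new Mongo("localhost:30999");
    var foo = mongos.getDB("foo");
    // an index on the new shard key, matching "build index foo.bar { notId: 1.0 }"
    foo.bar.ensureIndex({ notId: 1 });
    // reshard the (now empty) collection on the new key
    mongos.getDB("admin").runCommand({ shardCollection: "foo.bar", key: { notId: 1 } });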
m30000| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:39260 #20 (20 connections now open)
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000000'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000000 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|2||4fd976ccb766b48292746859
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a738'), notId: 0.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] warning: reloading config data for foo, wanted version 1|0||4fd976cdb766b4829274685b but currently have version 1|2||4fd976ccb766b48292746859
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 1|0||4fd976cdb766b4829274685b based on: (empty)
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0001", shardHost: "localhost:30001" } 0x9305be8
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion failed!
m30997| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976ccb766b48292746859'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connected connection!
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9305be8
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30000
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976ccb766b48292746859'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] resetting shard version of foo.bar on localhost:30000, version is zero
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0000", shardHost: "localhost:30000" } 0x93069f0
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { notId: MinKey } max: { notId: MaxKey } dataWritten: 6853755 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connected connection!
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30001
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0001", shardHost: "localhost:30001" } 0x9309d08
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { notId: MinKey } max: { notId: MaxKey } dataWritten: 196 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:59151 #11 (11 connections now open)
m30997| Thu Jun 14 01:29:49 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { notId: MinKey } max: { notId: MaxKey } dataWritten: 196 splitThreshold: 921
m30997| Thu Jun 14 01:29:49 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] creating new connection to:localhost:30002
m30997| Thu Jun 14 01:29:49 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connected connection!
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30002
m30002| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:46781 #10 (10 connections now open)
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] resetting shard version of foo.bar on localhost:30002, version is zero
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] setShardVersion shard0002 localhost:30002 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0002", shardHost: "localhost:30002" } 0x930a838
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
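The long run of writebacklisten results that follows corresponds to inserts of { notId: <n>, test: "b" } documents routed through m30997, whose cached chunk metadata still carried the pre-drop epoch 4fd976ccb766b48292746859; shard 30001 refuses each stale-versioned write, queues it as a writeback, and m30997's WriteBackListener replays it after reloading to epoch 4fd976cdb766b4829274685b. A sketch of the kind of insert loop behind those lines (the loop bound is a placeholder; this excerpt only shows notId counting up from 0):

    // inserts routed through the mongos that still holds the old collection epoch
    var staleFoo = new Mongo("localhost:30997").getDB("foo");
    for (var i = 0; i < 100; i++) {            // placeholder count, not taken from the log
        staleFoo.bar.insert({ notId: i, test: "b" });
    }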
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000001'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000001 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a739'), notId: 1.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000002'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000002 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a73a'), notId: 2.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000003'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000003 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a73b'), notId: 3.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000004'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000004 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a73c'), notId: 4.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000005'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000005 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a73d'), notId: 5.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000006'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000006 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a73e'), notId: 6.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000007'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000007 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a73f'), notId: 7.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000008'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000008 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a740'), notId: 8.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000009'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000009 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a741'), notId: 9.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000000a'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000000a needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a742'), notId: 10.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000000b'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000000b needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a743'), notId: 11.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000000c'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000000c needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a744'), notId: 12.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000000d'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000000d needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a745'), notId: 13.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000000e'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000000e needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a746'), notId: 14.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000000f'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000000f needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a747'), notId: 15.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000010'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000010 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a748'), notId: 16.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000011'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000011 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a749'), notId: 17.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000012'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000012 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a74a'), notId: 18.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000013'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000013 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a74b'), notId: 19.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000014'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000014 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a74c'), notId: 20.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000015'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000015 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a74d'), notId: 21.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000016'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000016 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a74e'), notId: 22.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000017'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000017 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a74f'), notId: 23.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000018'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000018 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a750'), notId: 24.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000019'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000019 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a751'), notId: 25.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000001a'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000001a needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a752'), notId: 26.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000001b'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000001b needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a753'), notId: 27.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000001c'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000001c needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a754'), notId: 28.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000001d'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000001d needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a755'), notId: 29.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000001e'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000001e needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a756'), notId: 30.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000001f'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000001f needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a757'), notId: 31.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000020'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000020 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a758'), notId: 32.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000021'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000021 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a759'), notId: 33.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000022'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000022 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a75a'), notId: 34.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000023'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000023 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a75b'), notId: 35.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000024'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000024 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a75c'), notId: 36.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000025'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000025 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a75d'), notId: 37.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000026'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000026 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a75e'), notId: 38.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000027'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000027 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a75f'), notId: 39.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000028'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000028 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a760'), notId: 40.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000029'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000029 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a761'), notId: 41.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000002a'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000002a needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a762'), notId: 42.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000002b'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000002b needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a763'), notId: 43.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000002c'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000002c needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a764'), notId: 44.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000002d'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000002d needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a765'), notId: 45.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000002e'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000002e needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a766'), notId: 46.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000002f'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000002f needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a767'), notId: 47.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000030'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000030 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a768'), notId: 48.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000031'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000031 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a769'), notId: 49.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000032'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000032 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a76a'), notId: 50.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000033'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000033 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a76b'), notId: 51.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000034'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000034 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a76c'), notId: 52.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000035'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000035 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a76d'), notId: 53.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000036'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000036 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a76e'), notId: 54.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000037'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000037 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a76f'), notId: 55.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000038'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000038 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a770'), notId: 56.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000039'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000039 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a771'), notId: 57.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000003a'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000003a needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a772'), notId: 58.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000003b'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000003b needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a773'), notId: 59.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000003c'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000003c needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a774'), notId: 60.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000003d'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000003d needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a775'), notId: 61.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000003e'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000003e needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a776'), notId: 62.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000003f'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000003f needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a777'), notId: 63.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000040'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000040 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a778'), notId: 64.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000041'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000041 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a779'), notId: 65.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000042'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000042 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a77a'), notId: 66.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000043'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000043 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a77b'), notId: 67.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000044'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000044 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a77c'), notId: 68.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000045'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000045 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a77d'), notId: 69.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000046'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000046 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a77e'), notId: 70.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000047'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000047 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a77f'), notId: 71.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000048'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000048 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a780'), notId: 72.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000049'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000049 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a781'), notId: 73.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000004a'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000004a needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a782'), notId: 74.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000004b'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000004b needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a783'), notId: 75.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000004c'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000004c needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a784'), notId: 76.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000004d'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000004d needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a785'), notId: 77.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000004e'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000004e needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a786'), notId: 78.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000004f'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000004f needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a787'), notId: 79.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000050'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000050 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a788'), notId: 80.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000051'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000051 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a789'), notId: 81.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000052'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000052 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a78a'), notId: 82.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000053'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000053 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a78b'), notId: 83.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000054'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000054 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a78c'), notId: 84.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000055'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000055 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a78d'), notId: 85.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000056'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000056 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a78e'), notId: 86.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000057'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000057 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a78f'), notId: 87.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000058'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000058 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a790'), notId: 88.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd0000000000000059'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd0000000000000059 needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a791'), notId: 89.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976cd000000000000005a'), connectionId: 6, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd976ccb766b48292746859'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:6 writebackId: 4fd976cd000000000000005a needVersion : 1|0||4fd976cdb766b4829274685b mine : 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] op: insert len: 77 ns: foo.bar{ _id: ObjectId('4fd976cd7e98ecb71768a792'), notId: 90.0, test: "b" }
m30997| Thu Jun 14 01:29:49 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 1|0||4fd976cdb766b4829274685b, at version 1|0||4fd976cdb766b4829274685b
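The run of WriteBackListener lines above shows the mongos on port 30997 replaying inserts that shard0001 bounced back because the connection's cached epoch (...4859) no longer matched the collection's current epoch (...685b). A minimal mongo-shell sketch of the pattern that produces this kind of traffic is below; the handle name, loop bound, and exact call site are assumptions, not code lifted from the test file.

    // Assumed sketch: insert through a mongos whose chunk-version cache is stale,
    // then use getLastError so the shell blocks until the writebacks are applied
    // (matching the "waited for gle" step later in this log).
    var staleMongos = new Mongo("localhost:30997");   // hypothetical handle name
    var coll = staleMongos.getCollection("foo.bar");
    for (var i = 0; i < 100; i++) {                   // loop bound is an assumption
        coll.insert({ notId: i, test: "b" });
    }
    printjson(staleMongos.getDB("foo").runCommand({ getLastError: 1 }));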
m30998| Thu Jun 14 01:29:49 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30998| Thu Jun 14 01:29:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|0||4fd976cdb766b4829274685b based on: (empty)
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), serverID: ObjectId('4fd976cbcd27128ed08b447f'), shard: "shard0001", shardHost: "localhost:30001" } 0x927ed68
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion failed!
m30998| { oldVersion: Timestamp 1000|2, oldVersionEpoch: ObjectId('4fd976ccb766b48292746859'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cdb766b4829274685b'), serverID: ObjectId('4fd976cbcd27128ed08b447f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x927ed68
m30998| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 1000|2, oldVersionEpoch: ObjectId('4fd976ccb766b48292746859'), ok: 1.0 }
m30999| Thu Jun 14 01:29:49 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:29:49 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:49-2", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651789717), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:29:49 [conn] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:49 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:29:49 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd976cdb766b4829274685c" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976cdb766b4829274685a" } }
m30999| Thu Jun 14 01:29:49 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' acquired, ts : 4fd976cdb766b4829274685c
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar all locked
m30001| Thu Jun 14 01:29:49 [conn5] CMD: drop foo.bar
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:29:49 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbb766b48292746857'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8c1c408
m30999| Thu Jun 14 01:29:49 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:29:49 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:49-3", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651789720), what: "dropCollection", ns: "foo.bar", details: {} }
m30001| Thu Jun 14 01:29:49 [conn5] wiping data for: foo.bar
m30999| Thu Jun 14 01:29:49 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' unlocked.
----
Re-creating unsharded collection from a sharded collection on different primary...
----
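Per the banner, foo's primary has to move before the collection can be recreated unsharded on a different shard. A hedged sketch of the admin command behind the movePrimary lines logged just below (the exact call in the test file is assumed, not shown in this log):

    // Assumed sketch: re-home the foo database from shard0001 to shard0000,
    // issued through the mongos on 30999; mongos drops the now chunk-less
    // database on the old primary as part of this.
    var mongos = new Mongo("localhost:30999");        // hypothetical handle name
    var admin = mongos.getDB("admin");
    printjson(admin.runCommand({ movePrimary: "foo", to: "shard0000" }));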
m30999| Thu Jun 14 01:29:49 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30999| Thu Jun 14 01:29:49 [conn] Moving foo primary from: shard0001:localhost:30001 to: shard0000:localhost:30000
m30999| Thu Jun 14 01:29:49 [conn] created new distributed lock for foo-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:49 [conn] inserting initial doc in config.locks for lock foo-movePrimary
m30999| Thu Jun 14 01:29:49 [conn] about to acquire distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651787:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:29:49 2012" },
m30999| "why" : "Moving primary shard of foo",
m30999| "ts" : { "$oid" : "4fd976cdb766b4829274685d" } }
m30999| { "_id" : "foo-movePrimary",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:29:49 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' acquired, ts : 4fd976cdb766b4829274685d
m30001| Thu Jun 14 01:29:49 [initandlisten] connection accepted from 127.0.0.1:59153 #12 (12 connections now open)
m30999| Thu Jun 14 01:29:49 [conn] movePrimary dropping database on localhost:30001, no sharded collections in foo
m30001| Thu Jun 14 01:29:49 [conn12] end connection 127.0.0.1:59153 (11 connections now open)
m30001| Thu Jun 14 01:29:49 [conn5] dropDatabase foo
m30001| Thu Jun 14 01:29:49 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.771 secs
m30001| Thu Jun 14 01:29:49 [conn5] command foo.$cmd command: { dropDatabase: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:103374 r:555 w:624184 reslen:54 103ms
m30999| Thu Jun 14 01:29:49 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' unlocked.
----
moved primary...
----
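With the primary moved, the insert logged below goes through a mongos (port 30997) that still believes foo.bar is sharded on { notId: 1 }; the missing shard key triggers a config reload, which finds the collection no longer sharded and resets the shard version on shard0000. A hedged sketch of the client side of this step (handle name assumed, document shape taken from the log):

    // Assumed sketch: write { test: "c" } through the stale mongos, then wait on
    // getLastError (the "waited for gle" banner below) so the write is flushed.
    var staleMongos = new Mongo("localhost:30997");   // hypothetical handle name
    staleMongos.getCollection("foo.bar").insert({ test: "c" });
    printjson(staleMongos.getDB("foo").runCommand({ getLastError: 1 }));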
m30997| Thu Jun 14 01:29:49 [conn] warning: shard key mismatch for insert { _id: ObjectId('4fd976cd7e98ecb71768a79c'), test: "c" }, expected values for { notId: 1.0 }, reloading config data to ensure not stale
m30997| Thu Jun 14 01:29:49 [conn] warning: no chunks found when reloading foo.bar, previous version was 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 0|0||000000000000000000000000 based on: 1|0||4fd976cdb766b4829274685b
m30997| Thu Jun 14 01:29:49 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30997| Thu Jun 14 01:29:49 [conn] User Assertion: 10181:not sharded:foo.bar
m30997| Thu Jun 14 01:29:49 [conn] warning: DBException thrown :: caused by :: 10181 not sharded:foo.bar
m30997| 0x84f514a 0x83f32ab 0x83f379e 0x83f3956 0x82ea09c 0x82e9aa4 0x82eb113 0x822967a 0x8229fce 0x8233873 0x8420fd1 0x81223b3 0x832cd95 0xaca542 0x67fb6e
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12b) [0x83f32ab]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo9uassertedEiPKc+0xae) [0x83f379e]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos [0x83f3956]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8DBConfig15getChunkManagerERKSsbb+0x1a7c) [0x82ea09c]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8DBConfig15getChunkManagerERKSsbb+0x1484) [0x82e9aa4]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8DBConfig23getChunkManagerIfExistsERKSsbb+0x43) [0x82eb113]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13ShardStrategy13_groupInsertsERKSsRSt6vectorINS_7BSONObjESaIS4_EERSt3mapIN5boost10shared_ptrIKNS_5ChunkEEES6_St4lessISD_ESaISt4pairIKSD_S6_EEERNSA_IKNS_12ChunkManagerEEERNSA_INS_5ShardEEEb+0xf4a) [0x822967a]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13ShardStrategy7_insertERKSsRSt6vectorINS_7BSONObjESaIS4_EERSt3mapIN5boost10shared_ptrIKNS_5ChunkEEES6_St4lessISD_ESaISt4pairIKSD_S6_EEEiRNS_7RequestERNS_9DbMessageEi+0xae) [0x8229fce]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13ShardStrategy7writeOpEiRNS_7RequestE+0x553) [0x8233873]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo7Request7processEi+0xd1) [0x8420fd1]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x83) [0x81223b3]
m30997| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x2d5) [0x832cd95]
m30997| /lib/i686/nosegneg/libpthread.so.0 [0xaca542]
m30997| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x67fb6e]
m30997| Thu Jun 14 01:29:49 [conn] warning: chunk manager not found for foo.bar :: caused by :: 10181 not sharded:foo.bar
m30997| Thu Jun 14 01:29:49 [conn] resetting shard version of foo.bar on localhost:30000, no longer sharded
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cc58d88aba7b3e681a'), shard: "shard0000", shardHost: "localhost:30000" } 0x93053d0
m30997| Thu Jun 14 01:29:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:29:49 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30000| Thu Jun 14 01:29:50 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.263 secs
m30000| Thu Jun 14 01:29:50 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:29:50 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.281 secs
m30000| Thu Jun 14 01:29:50 [conn17] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:29:50 [conn17] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:50 [conn17] insert foo.bar keyUpdates:0 locks(micros) W:9 w:555778 555ms
----
waited for gle...
----
m30000| Thu Jun 14 01:29:50 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
----
Re-creating sharded collection with different primary...
----
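This step moves foo's primary back to shard0001 and shards foo.bar again, which is why a new epoch (4fd976cfb766b4829274685f) appears below. A hedged sketch of the admin commands, matching the movePrimary and shardCollection lines that follow; the enableSharding call is an assumption, since the database is already partitioned in this run.

    // Assumed sketch: re-home the database, then shard the collection on _id,
    // creating one initial chunk under a new collection epoch.
    var mongos = new Mongo("localhost:30999");        // hypothetical handle name
    var admin = mongos.getDB("admin");
    printjson(admin.runCommand({ movePrimary: "foo", to: "shard0001" }));
    printjson(admin.runCommand({ enableSharding: "foo" }));   // likely a no-op here
    printjson(admin.runCommand({ shardCollection: "foo.bar", key: { _id: 1 } }));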
m30000| Thu Jun 14 01:29:50 [conn14] CMD: drop foo.bar
m30000| Thu Jun 14 01:29:50 [initandlisten] connection accepted from 127.0.0.1:39264 #21 (21 connections now open)
m30000| Thu Jun 14 01:29:50 [conn21] end connection 127.0.0.1:39264 (20 connections now open)
m30000| Thu Jun 14 01:29:50 [conn6] dropDatabase foo
m30999| Thu Jun 14 01:29:50 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:29:50 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Thu Jun 14 01:29:50 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30999| Thu Jun 14 01:29:50 [conn] Moving foo primary from: shard0000:localhost:30000 to: shard0001:localhost:30001
m30999| Thu Jun 14 01:29:50 [conn] created new distributed lock for foo-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:50 [conn] about to acquire distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651787:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:29:50 2012" },
m30999| "why" : "Moving primary shard of foo",
m30999| "ts" : { "$oid" : "4fd976ceb766b4829274685e" } }
m30999| { "_id" : "foo-movePrimary",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976cdb766b4829274685d" } }
m30999| Thu Jun 14 01:29:50 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' acquired, ts : 4fd976ceb766b4829274685e
m30999| Thu Jun 14 01:29:50 [conn] movePrimary dropping database on localhost:30000, no sharded collections in foo
m30998| Thu Jun 14 01:29:50 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:29:50 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:29:50 [conn] User Assertion: 10181:not sharded:foo.bar
m30998| Thu Jun 14 01:29:50 [conn] warning: chunk manager not found for foo.bar :: caused by :: 10181 not sharded:foo.bar
m30998| Thu Jun 14 01:29:50 [conn] resetting shard version of foo.bar on localhost:30000, no longer sharded
m30998| Thu Jun 14 01:29:50 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbcd27128ed08b447f'), shard: "shard0000", shardHost: "localhost:30000" } 0x927e538
m30998| Thu Jun 14 01:29:50 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:29:50 [conn] PCursor erasing empty state { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30000| Thu Jun 14 01:29:51 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.618 secs
m30000| Thu Jun 14 01:29:51 [conn6] command foo.$cmd command: { dropDatabase: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:625458 r:1517 w:3585 reslen:54 625ms
m30999| Thu Jun 14 01:29:51 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' unlocked.
m30999| Thu Jun 14 01:29:51 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Thu Jun 14 01:29:51 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:29:51 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:29:51 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd976cfb766b4829274685f
m30999| Thu Jun 14 01:29:51 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 1|0||4fd976cfb766b4829274685f based on: (empty)
m30999| Thu Jun 14 01:29:51 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cfb766b4829274685f'), serverID: ObjectId('4fd976cbb766b48292746857'), shard: "shard0001", shardHost: "localhost:30001" } 0x8c18060
m30001| Thu Jun 14 01:29:51 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:29:51 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.257 secs
m30001| Thu Jun 14 01:29:51 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:29:51 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.276 secs
m30001| Thu Jun 14 01:29:51 [conn5] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:29:51 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:29:51 [conn5] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:29:51 [conn5] insert foo.system.indexes keyUpdates:0 locks(micros) W:103374 r:673 w:1169076 544ms
m30001| Thu Jun 14 01:29:51 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cfb766b4829274685f'), serverID: ObjectId('4fd976cbb766b48292746857'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:86 w:948 reslen:171 543ms
m30001| Thu Jun 14 01:29:51 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:29:51 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30000| Thu Jun 14 01:29:51 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30999| Thu Jun 14 01:29:51 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976cdb766b4829274685b'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:29:51 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976cfb766b4829274685f'), serverID: ObjectId('4fd976cbb766b48292746857'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8c18060
m30999| Thu Jun 14 01:29:51 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:29:52 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.906 secs
m30000| Thu Jun 14 01:29:52 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.898 secs
m30000| Thu Jun 14 01:29:52 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:29:52 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.274 secs
m30000| Thu Jun 14 01:29:52 [conn17] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:29:52 [conn17] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:52 [conn17] insert foo.bar keyUpdates:0 locks(micros) W:9 w:1740291 1182ms
----
Done!
----
m30001| Thu Jun 14 01:29:52 [conn5] CMD: drop foo.bar
m30001| Thu Jun 14 01:29:52 [conn5] wiping data for: foo.bar
m30000| Thu Jun 14 01:29:52 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:29:52 [conn3] end connection 127.0.0.1:59130 (10 connections now open)
m30001| Thu Jun 14 01:29:52 [conn5] end connection 127.0.0.1:59135 (9 connections now open)
m30000| Thu Jun 14 01:29:52 [conn3] end connection 127.0.0.1:39218 (19 connections now open)
m30000| Thu Jun 14 01:29:52 [conn5] end connection 127.0.0.1:39223 (18 connections now open)
m30000| Thu Jun 14 01:29:52 [conn6] end connection 127.0.0.1:39224 (17 connections now open)
m30000| Thu Jun 14 01:29:52 [conn14] end connection 127.0.0.1:39239 (16 connections now open)
m30002| Thu Jun 14 01:29:52 [conn5] end connection 127.0.0.1:46765 (9 connections now open)
m30999| Thu Jun 14 01:29:52 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:29:52 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:52-4", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651792768), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:29:52 [conn] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:52 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651787:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:29:52 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd976d0b766b48292746860" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976cdb766b4829274685c" } }
m30999| Thu Jun 14 01:29:52 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' acquired, ts : 4fd976d0b766b48292746860
m30999| Thu Jun 14 01:29:52 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:29:52 [conn] ChunkManager::drop : foo.bar all locked
m30999| Thu Jun 14 01:29:52 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:29:52 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:29:52 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976cbb766b48292746857'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8c1c408
m30999| Thu Jun 14 01:29:52 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:29:52 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:29:52-5", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651792770), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:29:52 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651787:1804289383' unlocked.
m30999| Thu Jun 14 01:29:52 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30002| Thu Jun 14 01:29:52 [conn3] end connection 127.0.0.1:46760 (8 connections now open)
m30000| Thu Jun 14 01:29:53 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.611 secs
Thu Jun 14 01:29:53 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:29:53 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:29:53 [conn7] end connection 127.0.0.1:39226 (15 connections now open)
m30000| Thu Jun 14 01:29:53 [conn9] end connection 127.0.0.1:39229 (15 connections now open)
m30000| Thu Jun 14 01:29:53 [conn19] end connection 127.0.0.1:39255 (13 connections now open)
m30002| Thu Jun 14 01:29:53 [conn8] end connection 127.0.0.1:46776 (7 connections now open)
m30001| Thu Jun 14 01:29:53 [conn9] end connection 127.0.0.1:59146 (8 connections now open)
Thu Jun 14 01:29:54 shell: stopped mongo program on port 30998
m30997| Thu Jun 14 01:29:54 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Thu Jun 14 01:29:54 [conn6] end connection 127.0.0.1:59139 (7 connections now open)
m30002| Thu Jun 14 01:29:54 [conn6] end connection 127.0.0.1:46769 (6 connections now open)
m30001| Thu Jun 14 01:29:54 [conn7] end connection 127.0.0.1:59141 (6 connections now open)
m30001| Thu Jun 14 01:29:54 [conn11] end connection 127.0.0.1:59151 (5 connections now open)
m30002| Thu Jun 14 01:29:54 [conn10] end connection 127.0.0.1:46781 (5 connections now open)
m30000| Thu Jun 14 01:29:54 [conn10] end connection 127.0.0.1:39232 (12 connections now open)
m30000| Thu Jun 14 01:29:54 [conn17] end connection 127.0.0.1:39248 (12 connections now open)
m30000| Thu Jun 14 01:29:54 [conn11] end connection 127.0.0.1:39233 (10 connections now open)
m30000| Thu Jun 14 01:29:54 [conn20] end connection 127.0.0.1:39260 (9 connections now open)
m30000| Thu Jun 14 01:29:54 [conn13] end connection 127.0.0.1:39235 (8 connections now open)
Thu Jun 14 01:29:55 shell: stopped mongo program on port 30997
m30000| Thu Jun 14 01:29:55 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:29:55 [interruptThread] now exiting
m30000| Thu Jun 14 01:29:55 dbexit:
m30000| Thu Jun 14 01:29:55 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:29:55 [interruptThread] closing listening socket: 9
m30000| Thu Jun 14 01:29:55 [interruptThread] closing listening socket: 10
m30000| Thu Jun 14 01:29:55 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:29:55 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:29:55 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:29:55 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:29:55 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:29:55 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:29:55 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:29:55 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:29:55 dbexit: really exiting now
Thu Jun 14 01:29:56 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:29:56 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:29:56 [interruptThread] now exiting
m30001| Thu Jun 14 01:29:56 dbexit:
m30001| Thu Jun 14 01:29:56 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:29:56 [interruptThread] closing listening socket: 12
m30001| Thu Jun 14 01:29:56 [interruptThread] closing listening socket: 13
m30001| Thu Jun 14 01:29:56 [interruptThread] closing listening socket: 14
m30001| Thu Jun 14 01:29:56 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:29:56 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:29:56 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:29:56 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:29:56 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:29:56 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:29:56 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:29:56 dbexit: really exiting now
Thu Jun 14 01:29:57 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:29:57 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:29:57 [interruptThread] now exiting
m30002| Thu Jun 14 01:29:57 dbexit:
m30002| Thu Jun 14 01:29:57 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:29:57 [interruptThread] closing listening socket: 16
m30002| Thu Jun 14 01:29:57 [interruptThread] closing listening socket: 17
m30002| Thu Jun 14 01:29:57 [interruptThread] closing listening socket: 18
m30002| Thu Jun 14 01:29:57 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:29:57 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:29:57 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:29:57 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:29:57 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:29:57 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:29:57 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:29:57 dbexit: really exiting now
Thu Jun 14 01:29:58 shell: stopped mongo program on port 30002
*** ShardingTest test completed successfully in 12.244 seconds ***
12315.662861ms
Thu Jun 14 01:29:58 [initandlisten] connection accepted from 127.0.0.1:59266 #15 (2 connections now open)
*******************************************
Test : coll_epoch_test2.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/coll_epoch_test2.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/coll_epoch_test2.js";TestData.testFile = "coll_epoch_test2.js";TestData.testName = "coll_epoch_test2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:29:58 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:29:58 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:29:58
m30000| Thu Jun 14 01:29:58 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:29:58
m30000| Thu Jun 14 01:29:58 [initandlisten] MongoDB starting : pid=23250 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:29:58 [initandlisten]
m30000| Thu Jun 14 01:29:58 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:29:58 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:29:58 [initandlisten]
m30000| Thu Jun 14 01:29:58 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:29:58 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:29:58 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:29:58 [initandlisten]
m30000| Thu Jun 14 01:29:58 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:29:58 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:29:58 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:29:58 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:29:58 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:29:58 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:29:59 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39267 #1 (1 connection now open)
m30001| Thu Jun 14 01:29:59
m30001| Thu Jun 14 01:29:59 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:29:59
m30001| Thu Jun 14 01:29:59 [initandlisten] MongoDB starting : pid=23262 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:29:59 [initandlisten]
m30001| Thu Jun 14 01:29:59 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:29:59 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:29:59 [initandlisten]
m30001| Thu Jun 14 01:29:59 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:29:59 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:29:59 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:29:59 [initandlisten]
m30001| Thu Jun 14 01:29:59 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:29:59 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:29:59 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:29:59 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:29:59 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:29:59 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:59159 #1 (1 connection now open)
ShardingTest test :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
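The connection summary above is what ShardingTest prints once its config server and shards are up, just before it starts the verbose mongos routers on ports 30999 and below. A hedged sketch of a setup call that would produce a cluster like the one visible in this excerpt (the actual options used by coll_epoch_test2.js are not shown in this log):

    // Assumed, typical jstest setup; shard/mongos counts mirror what this
    // excerpt shows and are not taken from the test file itself.
    var st = new ShardingTest({ name: "test", shards: 2, mongos: 4, verbose: 1 });
    var config = st.s0.getDB("config");   // first mongos; hypothetical usage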
Thu Jun 14 01:29:59 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39270 #2 (2 connections now open)
m30999| Thu Jun 14 01:29:59 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:29:59 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23277 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30000| Thu Jun 14 01:29:59 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:29:59 [FileAllocator] creating directory /data/db/test0/_tmp
m30999| Thu Jun 14 01:29:59 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:29:59 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:29:59 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:29:59 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:29:59 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39272 #3 (3 connections now open)
m30999| Thu Jun 14 01:29:59 [mongosMain] connected connection!
m30000| Thu Jun 14 01:29:59 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0.246 secs
m30000| Thu Jun 14 01:29:59 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:29:59 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 16MB, took 0.272 secs
m30000| Thu Jun 14 01:29:59 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:29:59 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:29:59 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:59 [conn2] insert config.settings keyUpdates:0 locks(micros) w:538370 538ms
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:29:59 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:59 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:59 [mongosMain] connected connection!
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39275 #4 (4 connections now open)
m30000| Thu Jun 14 01:29:59 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:29:59 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:59 [mongosMain] MaxChunkSize: 50
m30000| Thu Jun 14 01:29:59 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:29:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:59 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:29:59 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:29:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:59 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:29:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:59 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:29:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:59 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:29:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:59 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:29:59 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:29:59 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:59 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:29:59 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:29:59 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:29:59 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:29:59 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:29:59 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:29:59 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:29:59
m30999| Thu Jun 14 01:29:59 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:29:59 [Balancer] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:29:59 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:29:59 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39276 #5 (5 connections now open)
m30999| Thu Jun 14 01:29:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:29:59 [Balancer] connected connection!
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39277 #6 (6 connections now open)
m30999| Thu Jun 14 01:29:59 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:29:59 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:29:59 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:29:59 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651799:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651799:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651799:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:29:59 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd976d72cd504d18aa720dc" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30000| Thu Jun 14 01:29:59 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:29:59 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:29:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651799:1804289383' acquired, ts : 4fd976d72cd504d18aa720dc
m30999| Thu Jun 14 01:29:59 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:29:59 [Balancer] no collections to balance
m30999| Thu Jun 14 01:29:59 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:29:59 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:29:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651799:1804289383' unlocked.
m30999| Thu Jun 14 01:29:59 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651799:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:29:59 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:29:59 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651799:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:29:59 [conn6] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:29:59 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:29:59 [conn6] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:29:59 [conn6] build index done. scanned 1 total records. 0 secs
Thu Jun 14 01:29:59 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:30000 -v
m30998| Thu Jun 14 01:29:59 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:29:59 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23299 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:29:59 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:29:59 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:29:59 [mongosMain] options: { configdb: "localhost:30000", port: 30998, verbose: true }
m30998| Thu Jun 14 01:29:59 [mongosMain] config string : localhost:30000
m30998| Thu Jun 14 01:29:59 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:29:59 [mongosMain] connection accepted from 127.0.0.1:51365 #1 (1 connection now open)
m30998| Thu Jun 14 01:29:59 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39280 #7 (7 connections now open)
m30998| Thu Jun 14 01:29:59 [mongosMain] connected connection!
m30998| Thu Jun 14 01:29:59 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:29:59 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:29:59 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:29:59 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:29:59 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:29:59 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:29:59 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:29:59 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:29:59 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:29:59 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:29:59 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:29:59 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39281 #8 (8 connections now open)
m30998| Thu Jun 14 01:29:59 [Balancer] connected connection!
m30998| Thu Jun 14 01:29:59 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:29:59 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:29:59
m30998| Thu Jun 14 01:29:59 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:29:59 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:29:59 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:29:59 [initandlisten] connection accepted from 127.0.0.1:39282 #9 (9 connections now open)
m30998| Thu Jun 14 01:29:59 [Balancer] connected connection!
m30998| Thu Jun 14 01:29:59 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:29:59 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651799:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339651799:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339651799:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:29:59 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd976d70e722db72412568c" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd976d72cd504d18aa720dc" } }
m30998| Thu Jun 14 01:29:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651799:1804289383' acquired, ts : 4fd976d70e722db72412568c
m30998| Thu Jun 14 01:29:59 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:29:59 [Balancer] no collections to balance
m30998| Thu Jun 14 01:29:59 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:29:59 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:29:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651799:1804289383' unlocked.
m30998| Thu Jun 14 01:29:59 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30998:1339651799:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:29:59 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:29:59 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30998:1339651799:1804289383', sleeping for 30000ms
Thu Jun 14 01:30:00 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30997 --configdb localhost:30000 -v
m30998| Thu Jun 14 01:30:00 [mongosMain] connection accepted from 127.0.0.1:35702 #1 (1 connection now open)
m30997| Thu Jun 14 01:30:00 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30997| Thu Jun 14 01:30:00 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23315 port=30997 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30997| Thu Jun 14 01:30:00 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30997| Thu Jun 14 01:30:00 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30997| Thu Jun 14 01:30:00 [mongosMain] options: { configdb: "localhost:30000", port: 30997, verbose: true }
m30997| Thu Jun 14 01:30:00 [mongosMain] config string : localhost:30000
m30997| Thu Jun 14 01:30:00 [mongosMain] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39285 #10 (10 connections now open)
m30997| Thu Jun 14 01:30:00 [mongosMain] connected connection!
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: CheckConfigServers
m30997| Thu Jun 14 01:30:00 [mongosMain] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39286 #11 (11 connections now open)
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:30:00 [mongosMain] connected connection!
m30997| Thu Jun 14 01:30:00 [mongosMain] MaxChunkSize: 50
m30997| Thu Jun 14 01:30:00 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30997| Thu Jun 14 01:30:00 [websvr] admin web console waiting for connections on port 31997
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: Balancer
m30997| Thu Jun 14 01:30:00 [Balancer] about to contact config servers and shards
m30997| Thu Jun 14 01:30:00 [Balancer] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: cursorTimeout
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39287 #12 (12 connections now open)
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: PeriodicTask::Runner
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:30:00 [Balancer] connected connection!
m30997| Thu Jun 14 01:30:00 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30997| Thu Jun 14 01:30:00 [mongosMain] waiting for connections on port 30997
m30997| Thu Jun 14 01:30:00 [Balancer] config servers and shards contacted successfully
m30997| Thu Jun 14 01:30:00 [Balancer] balancer id: domU-12-31-39-01-70-B4:30997 started at Jun 14 01:30:00
m30997| Thu Jun 14 01:30:00 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30997| Thu Jun 14 01:30:00 [Balancer] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39288 #13 (13 connections now open)
m30997| Thu Jun 14 01:30:00 [Balancer] connected connection!
m30997| Thu Jun 14 01:30:00 [Balancer] Refreshing MaxChunkSize: 50
m30997| Thu Jun 14 01:30:00 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651800:1804289383:
m30997| { "state" : 1,
m30997| "who" : "domU-12-31-39-01-70-B4:30997:1339651800:1804289383:Balancer:846930886",
m30997| "process" : "domU-12-31-39-01-70-B4:30997:1339651800:1804289383",
m30997| "when" : { "$date" : "Thu Jun 14 01:30:00 2012" },
m30997| "why" : "doing balance round",
m30997| "ts" : { "$oid" : "4fd976d88f2daed539d05b7b" } }
m30997| { "_id" : "balancer",
m30997| "state" : 0,
m30997| "ts" : { "$oid" : "4fd976d70e722db72412568c" } }
m30997| Thu Jun 14 01:30:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651800:1804289383' acquired, ts : 4fd976d88f2daed539d05b7b
m30997| Thu Jun 14 01:30:00 [Balancer] *** start balancing round
m30997| Thu Jun 14 01:30:00 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30997:1339651800:1804289383 (sleeping for 30000ms)
m30997| Thu Jun 14 01:30:00 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:30:00 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30997:1339651800:1804289383', sleeping for 30000ms
m30997| Thu Jun 14 01:30:00 [Balancer] no collections to balance
m30997| Thu Jun 14 01:30:00 [Balancer] no need to move any chunk
m30997| Thu Jun 14 01:30:00 [Balancer] *** end of balancing round
m30997| Thu Jun 14 01:30:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339651800:1804289383' unlocked.
m30997| Thu Jun 14 01:30:00 [mongosMain] connection accepted from 127.0.0.1:52126 #1 (1 connection now open)
Thu Jun 14 01:30:00 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30996 --configdb localhost:30000 -v
m30996| Thu Jun 14 01:30:00 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30996| Thu Jun 14 01:30:00 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23335 port=30996 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30996| Thu Jun 14 01:30:00 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30996| Thu Jun 14 01:30:00 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30996| Thu Jun 14 01:30:00 [mongosMain] options: { configdb: "localhost:30000", port: 30996, verbose: true }
m30996| Thu Jun 14 01:30:00 [mongosMain] config string : localhost:30000
m30996| Thu Jun 14 01:30:00 [mongosMain] creating new connection to:localhost:30000
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39291 #14 (14 connections now open)
m30996| Thu Jun 14 01:30:00 [mongosMain] connected connection!
m30996| Thu Jun 14 01:30:00 [mongosMain] MaxChunkSize: 50
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: CheckConfigServers
m30996| Thu Jun 14 01:30:00 [CheckConfigServers] creating new connection to:localhost:30000
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39292 #15 (15 connections now open)
m30996| Thu Jun 14 01:30:00 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30996| Thu Jun 14 01:30:00 [mongosMain] waiting for connections on port 30996
m30996| Thu Jun 14 01:30:00 [CheckConfigServers] connected connection!
m30996| Thu Jun 14 01:30:00 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30996| Thu Jun 14 01:30:00 [websvr] admin web console waiting for connections on port 31996
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: Balancer
m30996| Thu Jun 14 01:30:00 [Balancer] about to contact config servers and shards
m30996| Thu Jun 14 01:30:00 [Balancer] creating new connection to:localhost:30000
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: cursorTimeout
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: PeriodicTask::Runner
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39293 #16 (16 connections now open)
m30996| Thu Jun 14 01:30:00 [Balancer] connected connection!
m30996| Thu Jun 14 01:30:00 [Balancer] config servers and shards contacted successfully
m30996| Thu Jun 14 01:30:00 [Balancer] balancer id: domU-12-31-39-01-70-B4:30996 started at Jun 14 01:30:00
m30996| Thu Jun 14 01:30:00 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30996| Thu Jun 14 01:30:00 [Balancer] creating new connection to:localhost:30000
m30996| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39294 #17 (17 connections now open)
m30996| Thu Jun 14 01:30:00 [Balancer] connected connection!
m30996| Thu Jun 14 01:30:00 [Balancer] Refreshing MaxChunkSize: 50
m30996| Thu Jun 14 01:30:00 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30996:1339651800:1804289383 (sleeping for 30000ms)
m30996| Thu Jun 14 01:30:00 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:30:00 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30996:1339651800:1804289383', sleeping for 30000ms
m30996| Thu Jun 14 01:30:00 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30996:1339651800:1804289383:
m30996| { "state" : 1,
m30996| "who" : "domU-12-31-39-01-70-B4:30996:1339651800:1804289383:Balancer:846930886",
m30996| "process" : "domU-12-31-39-01-70-B4:30996:1339651800:1804289383",
m30996| "when" : { "$date" : "Thu Jun 14 01:30:00 2012" },
m30996| "why" : "doing balance round",
m30996| "ts" : { "$oid" : "4fd976d8e79a86a896f73fb6" } }
m30996| { "_id" : "balancer",
m30996| "state" : 0,
m30996| "ts" : { "$oid" : "4fd976d88f2daed539d05b7b" } }
m30996| Thu Jun 14 01:30:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30996:1339651800:1804289383' acquired, ts : 4fd976d8e79a86a896f73fb6
m30996| Thu Jun 14 01:30:00 [Balancer] *** start balancing round
m30996| Thu Jun 14 01:30:00 [Balancer] no collections to balance
m30996| Thu Jun 14 01:30:00 [Balancer] no need to move any chunk
m30996| Thu Jun 14 01:30:00 [Balancer] *** end of balancing round
m30996| Thu Jun 14 01:30:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30996:1339651800:1804289383' unlocked.
m30000| Thu Jun 14 01:30:00 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 32MB, took 0.58 secs
m30996| Thu Jun 14 01:30:00 [mongosMain] connection accepted from 127.0.0.1:58923 #1 (1 connection now open)
Thu Jun 14 01:30:00 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30995 --configdb localhost:30000 -v
m30995| Thu Jun 14 01:30:00 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30995| Thu Jun 14 01:30:00 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23354 port=30995 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30995| Thu Jun 14 01:30:00 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30995| Thu Jun 14 01:30:00 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30995| Thu Jun 14 01:30:00 [mongosMain] options: { configdb: "localhost:30000", port: 30995, verbose: true }
m30995| Thu Jun 14 01:30:00 [mongosMain] config string : localhost:30000
m30995| Thu Jun 14 01:30:00 [mongosMain] creating new connection to:localhost:30000
m30995| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39297 #18 (18 connections now open)
m30995| Thu Jun 14 01:30:00 [mongosMain] connected connection!
m30995| Thu Jun 14 01:30:00 [mongosMain] MaxChunkSize: 50
m30995| Thu Jun 14 01:30:00 BackgroundJob starting: CheckConfigServers
m30995| Thu Jun 14 01:30:00 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30995| Thu Jun 14 01:30:00 [websvr] admin web console waiting for connections on port 31995
m30995| Thu Jun 14 01:30:00 BackgroundJob starting: Balancer
m30995| Thu Jun 14 01:30:00 [Balancer] about to contact config servers and shards
m30995| Thu Jun 14 01:30:00 [Balancer] creating new connection to:localhost:30000
m30995| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39298 #19 (19 connections now open)
m30995| Thu Jun 14 01:30:00 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30995| Thu Jun 14 01:30:00 [mongosMain] waiting for connections on port 30995
m30995| Thu Jun 14 01:30:00 [Balancer] connected connection!
m30995| Thu Jun 14 01:30:00 [Balancer] config servers and shards contacted successfully
m30995| Thu Jun 14 01:30:00 [Balancer] balancer id: domU-12-31-39-01-70-B4:30995 started at Jun 14 01:30:00
m30995| Thu Jun 14 01:30:00 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30995| Thu Jun 14 01:30:00 [Balancer] creating new connection to:localhost:30000
m30995| Thu Jun 14 01:30:00 BackgroundJob starting: cursorTimeout
m30995| Thu Jun 14 01:30:00 BackgroundJob starting: PeriodicTask::Runner
m30995| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39299 #20 (20 connections now open)
m30995| Thu Jun 14 01:30:00 [Balancer] connected connection!
m30995| Thu Jun 14 01:30:00 [Balancer] Refreshing MaxChunkSize: 50
m30995| Thu Jun 14 01:30:00 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30995:1339651800:1804289383:
m30995| { "state" : 1,
m30995| "who" : "domU-12-31-39-01-70-B4:30995:1339651800:1804289383:Balancer:846930886",
m30995| "process" : "domU-12-31-39-01-70-B4:30995:1339651800:1804289383",
m30995| "when" : { "$date" : "Thu Jun 14 01:30:00 2012" },
m30995| "why" : "doing balance round",
m30995| "ts" : { "$oid" : "4fd976d8db8336423a87f5c6" } }
m30995| { "_id" : "balancer",
m30995| "state" : 0,
m30995| "ts" : { "$oid" : "4fd976d8e79a86a896f73fb6" } }
m30995| Thu Jun 14 01:30:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30995:1339651800:1804289383' acquired, ts : 4fd976d8db8336423a87f5c6
m30995| Thu Jun 14 01:30:00 [Balancer] *** start balancing round
m30995| Thu Jun 14 01:30:00 [Balancer] no collections to balance
m30995| Thu Jun 14 01:30:00 [Balancer] no need to move any chunk
m30995| Thu Jun 14 01:30:00 [Balancer] *** end of balancing round
m30995| Thu Jun 14 01:30:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30995:1339651800:1804289383' unlocked.
m30995| Thu Jun 14 01:30:00 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30995:1339651800:1804289383 (sleeping for 30000ms)
m30995| Thu Jun 14 01:30:00 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:30:00 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30995:1339651800:1804289383', sleeping for 30000ms
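The cluster being exercised here is two mongod shards on ports 30000/30001 (the one on 30000 also acting as the single config server) plus five mongos routers on ports 30999 down to 30995, all started by the jstest's ShardingTest helper, as the "ShardingTest ... going to add shard" lines below show. A minimal sketch of such a setup; the exact constructor options used by this test are an assumption and not visible in the log:
    // hypothetical setup; the real test's options are not shown in this log
    var st = new ShardingTest({ shards: 2, mongos: 5 });
    var admin = st.s.getDB("admin");   // st.s is the first router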
ShardingTest undefined going to add shard : localhost:30000
m30995| Thu Jun 14 01:30:00 [mongosMain] connection accepted from 127.0.0.1:49828 #1 (1 connection now open)
m30999| Thu Jun 14 01:30:00 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:30:00 [conn6] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:30:00 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:30:00 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:30:00 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:30:00 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:00 [conn] connected connection!
m30001| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:59191 #2 (2 connections now open)
m30999| Thu Jun 14 01:30:00 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:30:00 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39302 #21 (21 connections now open)
m30999| Thu Jun 14 01:30:00 [conn] connected connection!
m30999| Thu Jun 14 01:30:00 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976d72cd504d18aa720db
m30999| Thu Jun 14 01:30:00 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:30:00 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:00 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:00 [conn] connected connection!
m30999| Thu Jun 14 01:30:00 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976d72cd504d18aa720db
m30999| Thu Jun 14 01:30:00 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:30:00 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:59193 #3 (3 connections now open)
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
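The three "Waiting ..." lines are the test harness making sure the balancer is off before chunks are split and moved by hand. In this version the off switch is the balancer document in config.settings; disabling it manually would look roughly like this (a sketch, the helper the test actually calls is not shown in the log):
    // upsert { _id: "balancer", stopped: true } into the config database via any mongos
    db.getSiblingDB("config").settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true);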
m30996| Thu Jun 14 01:30:00 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30000| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:39304 #22 (22 connections now open)
m30001| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:59195 #4 (4 connections now open)
----
Enabling sharding for the first time...
----
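Judging by the "enabling sharding on: foo" and "CMD: shardcollection" entries below, this step amounts to roughly the following admin commands against one mongos (a sketch; only the names foo and foo.bar and the { _id: 1 } shard key are taken from the log):
    var admin = db.getSiblingDB("admin");
    printjson(admin.runCommand({ enableSharding: "foo" }));
    printjson(admin.runCommand({ shardCollection: "foo.bar", key: { _id: 1 } }));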
m30999| Thu Jun 14 01:30:00 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:30:00 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:00 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:00 [conn] connected connection!
m30001| Thu Jun 14 01:30:00 [initandlisten] connection accepted from 127.0.0.1:59196 #5 (5 connections now open)
m30999| Thu Jun 14 01:30:00 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:30:00 [conn] put [foo] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:30:00 [conn] enabling sharding on: foo
m30999| Thu Jun 14 01:30:00 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:30:00 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:30:00 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd976d82cd504d18aa720dd
m30999| Thu Jun 14 01:30:00 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd976d82cd504d18aa720dd based on: (empty)
m30001| Thu Jun 14 01:30:00 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:30:00 [FileAllocator] creating directory /data/db/test1/_tmp
m30000| Thu Jun 14 01:30:00 [conn6] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:30:00 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:30:00 [conn] resetting shard version of foo.bar on localhost:30000, version is zero
m30999| Thu Jun 14 01:30:00 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0000", shardHost: "localhost:30000" } 0xa61ec40
m30999| Thu Jun 14 01:30:00 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:00 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0001", shardHost: "localhost:30001" } 0xa61e600
m30001| Thu Jun 14 01:30:01 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.262 secs
m30001| Thu Jun 14 01:30:01 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:30:01 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.249 secs
m30001| Thu Jun 14 01:30:01 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:30:01 [conn5] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:30:01 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:30:01 [conn5] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:30:01 [conn5] insert foo.system.indexes keyUpdates:0 locks(micros) W:80 r:254 w:530997 530ms
m30001| Thu Jun 14 01:30:01 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:98 reslen:171 528ms
m30999| Thu Jun 14 01:30:01 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:30:01 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa61e600
m30001| Thu Jun 14 01:30:01 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:30:01 [initandlisten] connection accepted from 127.0.0.1:39307 #23 (23 connections now open)
m30999| Thu Jun 14 01:30:01 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:01 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 7396472 splitThreshold: 921
m30999| Thu Jun 14 01:30:01 [conn] chunk not full enough to trigger auto-split no split entry
----
Sharding collection across multiple shards...
----
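This step splits the single initial chunk at _id: 0 and then moves the upper half to shard0000, matching the splitChunk and movechunk entries logged below. A rough mongo shell equivalent (the command shapes and the { _id: 0 } split point come from the log; the variable name is mine):
    var admin = db.getSiblingDB("admin");
    // split the MinKey..MaxKey chunk of foo.bar at _id: 0
    printjson(admin.runCommand({ split: "foo.bar", middle: { _id: 0 } }));                     // -> { "ok" : 1 }
    // move the [0, MaxKey) chunk from shard0001 to shard0000
    printjson(admin.runCommand({ moveChunk: "foo.bar", find: { _id: 0 }, to: "shard0000" }));  // -> { "millis" : ..., "ok" : 1 }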
m30999| Thu Jun 14 01:30:01 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:30:01 [initandlisten] connection accepted from 127.0.0.1:39308 #24 (24 connections now open)
m30000| Thu Jun 14 01:30:01 [initandlisten] connection accepted from 127.0.0.1:39309 #25 (25 connections now open)
m30001| Thu Jun 14 01:30:01 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:01 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:01 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' acquired, ts : 4fd976d9c85617b2d9ff6f96
m30001| Thu Jun 14 01:30:01 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651801:2027961303 (sleeping for 30000ms)
m30001| Thu Jun 14 01:30:01 [conn5] splitChunk accepted at version 1|0||4fd976d82cd504d18aa720dd
m30001| Thu Jun 14 01:30:01 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:01-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651801356), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd976d82cd504d18aa720dd') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd976d82cd504d18aa720dd') } } }
m30001| Thu Jun 14 01:30:01 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' unlocked.
m30999| Thu Jun 14 01:30:01 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|2||4fd976d82cd504d18aa720dd based on: 1|0||4fd976d82cd504d18aa720dd
{ "ok" : 1 }
m30999| Thu Jun 14 01:30:01 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:30:01 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:30:01 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:01 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:01 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' acquired, ts : 4fd976d9c85617b2d9ff6f97
m30001| Thu Jun 14 01:30:01 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:01-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651801360), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:30:01 [conn5] moveChunk request accepted at version 1|2||4fd976d82cd504d18aa720dd
m30001| Thu Jun 14 01:30:01 [conn5] moveChunk number of documents: 1
m30001| Thu Jun 14 01:30:01 [initandlisten] connection accepted from 127.0.0.1:59200 #6 (6 connections now open)
m30000| Thu Jun 14 01:30:01 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:30:02 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:30:02 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 1.184 secs
m30001| Thu Jun 14 01:30:02 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 1.202 secs
m30000| Thu Jun 14 01:30:02 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:30:03 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:30:03 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.945 secs
m30000| Thu Jun 14 01:30:03 [migrateThread] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:30:03 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:03 [migrateThread] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:30:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 0.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:30:03 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30000| Thu Jun 14 01:30:04 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.605 secs
m30001| Thu Jun 14 01:30:04 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:30:04 [conn5] moveChunk setting version to: 2|0||4fd976d82cd504d18aa720dd
m30000| Thu Jun 14 01:30:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 0.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:30:04 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651804373), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 5: 2145, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 867 } }
m30000| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:39311 #26 (26 connections now open)
m30001| Thu Jun 14 01:30:04 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:30:04 [conn5] moveChunk updating self version to: 2|1||4fd976d82cd504d18aa720dd through { _id: MinKey } -> { _id: 0.0 } for collection 'foo.bar'
m30001| Thu Jun 14 01:30:04 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651804378), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:30:04 [conn5] doing delete inline
m30001| Thu Jun 14 01:30:04 [conn5] moveChunk deleted: 1
m30001| Thu Jun 14 01:30:04 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' unlocked.
m30001| Thu Jun 14 01:30:04 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651804379), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 3009, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:30:04 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:80 r:352 w:531255 reslen:37 3019ms
{ "millis" : 3021, "ok" : 1 }
m30999| Thu Jun 14 01:30:04 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 2|1||4fd976d82cd504d18aa720dd based on: 1|2||4fd976d82cd504d18aa720dd
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "foo", "partitioned" : true, "primary" : "shard0001" }
      foo.bar chunks:
        shard0001  1
        shard0000  1
        { "_id" : { $minKey : 1 } } -->> { "_id" : 0 } on : shard0001 Timestamp(2000, 1)
        { "_id" : 0 } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
----
Loading this status in all mongoses...
----
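The { "flushed" : true, "ok" : 1 } replies that follow look like each router being told to reload its cached routing metadata, i.e. flushRouterConfig run once per mongos (a sketch; repeating it per port is an assumption, only the command reply appears in the log):
    // repeated for every router in this run (ports 30999, 30998, 30997, 30996, 30995)
    printjson(db.getSiblingDB("admin").runCommand({ flushRouterConfig: 1 }));  // -> { "flushed" : true, "ok" : 1 }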
{ "flushed" : true, "ok" : 1 }
m30999| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 2|1||4fd976d82cd504d18aa720dd based on: (empty)
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0000", shardHost: "localhost:30000" } 0xa61ec40
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa61ec40
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0001", shardHost: "localhost:30001" } 0xa61e600
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), ok: 1.0 }
m30000| Thu Jun 14 01:30:04 [conn21] no current chunk manager found for this shard, will initialize
{ "flushed" : true, "ok" : 1 }
m30998| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30998| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30998| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 2|1||4fd976d82cd504d18aa720dd based on: (empty)
m30998| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:39312 #27 (27 connections now open)
m30998| Thu Jun 14 01:30:04 [conn] connected connection!
m30998| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976d70e722db72412568b
m30998| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:30:04 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d70e722db72412568b'), shard: "shard0000", shardHost: "localhost:30000" } 0x886bbe8
m30998| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30000
m30998| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59203 #7 (7 connections now open)
m30998| Thu Jun 14 01:30:04 [conn] connected connection!
m30998| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976d70e722db72412568b
m30998| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30001
m30998| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d70e722db72412568b'), shard: "shard0001", shardHost: "localhost:30001" } 0x886cf68
m30998| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30001
m30998| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30998| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] connected connection!
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59204 #8 (8 connections now open)
{ "flushed" : true, "ok" : 1 }
m30997| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30997| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 2|1||4fd976d82cd504d18aa720dd based on: (empty)
m30997| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:39315 #28 (28 connections now open)
m30997| Thu Jun 14 01:30:04 [conn] connected connection!
m30997| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976d88f2daed539d05b7a
m30997| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30000
m30997| Thu Jun 14 01:30:04 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d88f2daed539d05b7a'), shard: "shard0000", shardHost: "localhost:30000" } 0x9c563f0
m30997| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30000
m30997| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:30:04 [conn] connected connection!
m30997| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976d88f2daed539d05b7a
m30997| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30001
m30997| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59206 #9 (9 connections now open)
m30997| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d88f2daed539d05b7a'), shard: "shard0001", shardHost: "localhost:30001" } 0x9c56be8
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59207 #10 (10 connections now open)
m30997| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] connected connection!
m30997| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
{ "flushed" : true, "ok" : 1 }
m30996| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30996| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 2|1||4fd976d82cd504d18aa720dd based on: (empty)
m30996| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30000
m30996| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:39318 #29 (29 connections now open)
m30996| Thu Jun 14 01:30:04 [conn] connected connection!
m30996| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976d8e79a86a896f73fb5
m30996| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30000
m30996| Thu Jun 14 01:30:04 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d8e79a86a896f73fb5'), shard: "shard0000", shardHost: "localhost:30000" } 0x89b0670
m30996| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30000
m30996| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30996| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30001
m30996| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59209 #11 (11 connections now open)
m30996| Thu Jun 14 01:30:04 [conn] connected connection!
m30996| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976d8e79a86a896f73fb5
m30996| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30001
m30996| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d8e79a86a896f73fb5'), shard: "shard0001", shardHost: "localhost:30001" } 0x89b2930
m30996| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30001
m30996| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30996| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30996| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59210 #12 (12 connections now open)
m30996| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] connected connection!
m30995| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
{ "flushed" : true, "ok" : 1 }
m30995| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30995| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 2|1||4fd976d82cd504d18aa720dd based on: (empty)
m30995| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30000
m30995| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:39321 #30 (30 connections now open)
m30995| Thu Jun 14 01:30:04 [conn] connected connection!
m30995| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd976d8db8336423a87f5c5
m30995| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30000
m30995| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30000
m30995| Thu Jun 14 01:30:04 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d8db8336423a87f5c5'), shard: "shard0000", shardHost: "localhost:30000" } 0x9d34580
m30995| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30995| Thu Jun 14 01:30:04 [conn] creating new connection to:localhost:30001
m30995| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59212 #13 (13 connections now open)
m30995| Thu Jun 14 01:30:04 [conn] connected connection!
m30995| Thu Jun 14 01:30:04 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd976d8db8336423a87f5c5
m30995| Thu Jun 14 01:30:04 [conn] initializing shard connection to localhost:30001
m30995| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), serverID: ObjectId('4fd976d8db8336423a87f5c5'), shard: "shard0001", shardHost: "localhost:30001" } 0x9d33900
m30995| Thu Jun 14 01:30:04 BackgroundJob starting: WriteBackListener-localhost:30001
m30995| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30995| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
----
Rebuilding sharded collection with different split...
----
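From the DROP, shardcollection, split at _id: 200 and movechunk entries below, the rebuild step boils down to roughly the following (a sketch assembled from the log; the documents inserted to trigger the autosplit at _id: 0 are not visible here and are omitted):
    var admin = db.getSiblingDB("admin");
    db.getSiblingDB("foo").bar.drop();                                                           // "DROP: foo.bar"
    printjson(admin.runCommand({ shardCollection: "foo.bar", key: { _id: 1 } }));
    printjson(admin.runCommand({ split: "foo.bar", middle: { _id: 200 } }));
    printjson(admin.runCommand({ moveChunk: "foo.bar", find: { _id: 200 }, to: "shard0000" }));  // -> { "millis" : ..., "ok" : 1 }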
m30999| Thu Jun 14 01:30:04 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:30:04 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-0", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651804424), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:30:04 [conn] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:30:04 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651799:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651799:1804289383:conn:2044897763",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651799:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:30:04 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd976dc2cd504d18aa720de" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976d9c85617b2d9ff6f97" } }
m30999| Thu Jun 14 01:30:04 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651799:1804289383' acquired, ts : 4fd976dc2cd504d18aa720de
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager::drop : foo.bar all locked
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa61c5c0
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa623148
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:30:04 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-1", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651804428), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:30:04 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651799:1804289383' unlocked.
m30000| Thu Jun 14 01:30:04 [conn5] CMD: drop foo.bar
m30000| Thu Jun 14 01:30:04 [conn5] wiping data for: foo.bar
m30001| Thu Jun 14 01:30:04 [conn5] CMD: drop foo.bar
m30001| Thu Jun 14 01:30:04 [conn5] wiping data for: foo.bar
m30995| Thu Jun 14 01:30:04 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:04 [initandlisten] connection accepted from 127.0.0.1:59213 #14 (14 connections now open)
m30995| Thu Jun 14 01:30:04 [WriteBackListener-localhost:30001] connected connection!
m30999| Thu Jun 14 01:30:04 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Thu Jun 14 01:30:04 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:30:04 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:30:04 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd976dc2cd504d18aa720df
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 6 version: 1|0||4fd976dc2cd504d18aa720df based on: (empty)
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0001", shardHost: "localhost:30001" } 0xa61e600
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa61e600
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:30:04 [conn5] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:30:04 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:30:04 [conn5] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:30:04 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:30:04 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 9462484 splitThreshold: 921
m30999| Thu Jun 14 01:30:04 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:30:04 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:30:04 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:30:04 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:30:04 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:30:04 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:30:04 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:30:04 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:30:04 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:30:04 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30001| Thu Jun 14 01:30:04 [conn5] request split points lookup for chunk foo.bar { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:30:04 [conn5] max number of requested split points reached (2) before the end of chunk foo.bar { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:30:04 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:04 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:04 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' acquired, ts : 4fd976dcc85617b2d9ff6f98
m30001| Thu Jun 14 01:30:04 [conn5] splitChunk accepted at version 1|0||4fd976dc2cd504d18aa720df
m30001| Thu Jun 14 01:30:04 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651804440), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd976dc2cd504d18aa720df') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd976dc2cd504d18aa720df') } } }
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 7 version: 1|2||4fd976dc2cd504d18aa720df based on: 1|0||4fd976dc2cd504d18aa720df
m30999| Thu Jun 14 01:30:04 [conn] autosplitted foo.bar shard: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0001", shardHost: "localhost:30001" } 0xa61e600
m30999| Thu Jun 14 01:30:04 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), ok: 1.0 }
m30999| Thu Jun 14 01:30:04 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 3547949 splitThreshold: 471859
m30999| Thu Jun 14 01:30:04 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:30:04 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' unlocked.
m30999| Thu Jun 14 01:30:04 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:30:04 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 200.0 } ], shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:04 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:04 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' acquired, ts : 4fd976dcc85617b2d9ff6f99
m30001| Thu Jun 14 01:30:04 [conn5] splitChunk accepted at version 1|2||4fd976dc2cd504d18aa720df
m30001| Thu Jun 14 01:30:04 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651804447), what: "split", ns: "foo.bar", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 200.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd976dc2cd504d18aa720df') }, right: { min: { _id: 200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd976dc2cd504d18aa720df') } } }
m30001| Thu Jun 14 01:30:04 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' unlocked.
m30999| Thu Jun 14 01:30:04 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 8 version: 1|4||4fd976dc2cd504d18aa720df based on: 1|2||4fd976dc2cd504d18aa720df
{ "ok" : 1 }
m30999| Thu Jun 14 01:30:04 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 200.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:30:04 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 200.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:30:04 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 200.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_200.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:04 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:04 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' acquired, ts : 4fd976dcc85617b2d9ff6f9a
m30001| Thu Jun 14 01:30:04 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:04-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651804451), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 200.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:30:04 [conn5] moveChunk request accepted at version 1|4||4fd976dc2cd504d18aa720df
m30001| Thu Jun 14 01:30:04 [conn5] moveChunk number of documents: 0
m30000| Thu Jun 14 01:30:04 [migrateThread] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:30:04 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:04 [migrateThread] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:30:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 200.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:30:05 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: 200.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:30:05 [conn5] moveChunk setting version to: 2|0||4fd976dc2cd504d18aa720df
m30000| Thu Jun 14 01:30:05 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 200.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:30:05 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:05-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651805461), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 200.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30001| Thu Jun 14 01:30:05 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30001", min: { _id: 200.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:30:05 [conn5] moveChunk updating self version to: 2|1||4fd976dc2cd504d18aa720df through { _id: MinKey } -> { _id: 0.0 } for collection 'foo.bar'
m30001| Thu Jun 14 01:30:05 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:05-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651805466), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 200.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:30:05 [conn5] doing delete inline
m30001| Thu Jun 14 01:30:05 [conn5] moveChunk deleted: 0
m30999| Thu Jun 14 01:30:05 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:30:05 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 9 version: 2|1||4fd976dc2cd504d18aa720df based on: 1|4||4fd976dc2cd504d18aa720df
m30001| Thu Jun 14 01:30:05 [conn5] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30001:1339651801:2027961303' unlocked.
m30001| Thu Jun 14 01:30:05 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:05-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59196", time: new Date(1339651805467), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 200.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:30:05 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 200.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_200.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:100 r:912 w:532343 reslen:37 1016ms
{ "millis" : 1017, "ok" : 1 }
----
Checking other mongoses for detection of change...
----
----
Checking find...
----
m30998| Thu Jun 14 01:30:05 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30998| Thu Jun 14 01:30:05 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 2|1||4fd976dc2cd504d18aa720df based on: (empty)
m30998| Thu Jun 14 01:30:05 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d70e722db72412568b'), shard: "shard0001", shardHost: "localhost:30001" } 0x886cf68
m30998| Thu Jun 14 01:30:05 [conn] setShardVersion failed!
m30998| { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30998| Thu Jun 14 01:30:05 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d70e722db72412568b'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x886cf68
m30998| Thu Jun 14 01:30:05 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), ok: 1.0 }
m30998| Thu Jun 14 01:30:05 [conn] PCursor erasing empty state { state: {}, retryNext: false, init: false, finish: false, errored: false }
----
Checking update...
----
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976dd0000000000000000'), connectionId: 28, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), yourVersion: Timestamp 2000|0, yourVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:30:05 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 3971237 splitThreshold: 471859
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:28 writebackId: 4fd976dd0000000000000000 needVersion : 0|0||000000000000000000000000 mine : 2|1||4fd976d82cd504d18aa720dd
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] op: update len: 76 ns: foo.bar flags: 0 query: { _id: 1.0 } update: { $set: { updated: true } }
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] warning: reloading config data for foo, wanted version 0|0||000000000000000000000000 but currently have version 2|1||4fd976d82cd504d18aa720dd
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30997| Thu Jun 14 01:30:05 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 2|1||4fd976dc2cd504d18aa720df based on: (empty)
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:30:05 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:05 [initandlisten] connection accepted from 127.0.0.1:39324 #31 (31 connections now open)
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connected connection!
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30000
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d88f2daed539d05b7a'), shard: "shard0000", shardHost: "localhost:30000" } 0x9c553f0
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion failed!
m30997| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30000| Thu Jun 14 01:30:05 [conn31] no current chunk manager found for this shard, will initialize
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d88f2daed539d05b7a'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9c553f0
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:30:05 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:05 [initandlisten] connection accepted from 127.0.0.1:59215 #15 (15 connections now open)
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connected connection!
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30001
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d88f2daed539d05b7a'), shard: "shard0001", shardHost: "localhost:30001" } 0x9c59268
m30997| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0000", shardHost: "localhost:30000" } 0xa61ec40
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa61ec40
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), ok: 1.0 }
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d72cd504d18aa720db'), shard: "shard0001", shardHost: "localhost:30001" } 0xa61e600
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), ok: 1.0 }
----
Checking insert...
----
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976dd0000000000000001'), connectionId: 29, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), yourVersion: Timestamp 2000|0, yourVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), msg: BinData }, ok: 1.0 }
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:29 writebackId: 4fd976dd0000000000000001 needVersion : 2|0||4fd976dc2cd504d18aa720df mine : 2|1||4fd976d82cd504d18aa720dd
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] op: insert len: 46 ns: foo.bar{ _id: 101.0 }
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] warning: reloading config data for foo, wanted version 2|0||4fd976dc2cd504d18aa720df but currently have version 2|1||4fd976d82cd504d18aa720dd
m30996| Thu Jun 14 01:30:05 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 3971195 splitThreshold: 471859
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30996| Thu Jun 14 01:30:05 [conn] chunk not full enough to trigger auto-split no split entry
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 2|1||4fd976dc2cd504d18aa720df based on: (empty)
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:30:05 [initandlisten] connection accepted from 127.0.0.1:39326 #32 (32 connections now open)
m30996| Thu Jun 14 01:30:05 BackgroundJob starting: ConnectBG
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connected connection!
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30000
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d8e79a86a896f73fb5'), shard: "shard0000", shardHost: "localhost:30000" } 0x89b3de8
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] creating new connection to:localhost:30001
m30996| Thu Jun 14 01:30:05 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:05 [initandlisten] connection accepted from 127.0.0.1:59217 #16 (16 connections now open)
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connected connection!
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30001
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d8e79a86a896f73fb5'), shard: "shard0001", shardHost: "localhost:30001" } 0x89b2b70
m30996| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
----
Checking remove...
----
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd976dd0000000000000002'), connectionId: 30, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), yourVersion: Timestamp 2000|0, yourVersionEpoch: ObjectId('4fd976d82cd504d18aa720dd'), msg: BinData }, ok: 1.0 }
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:30 writebackId: 4fd976dd0000000000000002 needVersion : 2|0||4fd976dc2cd504d18aa720df mine : 2|1||4fd976d82cd504d18aa720dd
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] op: remove len: 50 ns: foo.bar flags: 0 query: { _id: 2.0 }
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] warning: reloading config data for foo, wanted version 2|0||4fd976dc2cd504d18aa720df but currently have version 2|1||4fd976d82cd504d18aa720dd
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 2|1||4fd976dc2cd504d18aa720df based on: (empty)
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30995| Thu Jun 14 01:30:05 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:05 [initandlisten] connection accepted from 127.0.0.1:39328 #33 (33 connections now open)
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connected connection!
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30000
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d8db8336423a87f5c5'), shard: "shard0000", shardHost: "localhost:30000" } 0x9d36cb0
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] creating new connection to:localhost:30001
m30995| Thu Jun 14 01:30:05 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:05 [initandlisten] connection accepted from 127.0.0.1:59219 #17 (17 connections now open)
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] connected connection!
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30001
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd976dc2cd504d18aa720df'), serverID: ObjectId('4fd976d8db8336423a87f5c5'), shard: "shard0001", shardHost: "localhost:30001" } 0x9d37458
m30995| Thu Jun 14 01:30:05 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:05 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:30:05 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:05-2", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651805514), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:30:05 [conn] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:30:05 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651799:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651799:1804289383:conn:2044897763",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651799:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:30:05 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd976dd2cd504d18aa720e0" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd976dcc85617b2d9ff6f9a" } }
m30999| Thu Jun 14 01:30:05 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651799:1804289383' acquired, ts : 4fd976dd2cd504d18aa720e0
m30999| Thu Jun 14 01:30:05 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:30:05 [conn] ChunkManager::drop : foo.bar all locked
m30999| Thu Jun 14 01:30:05 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:30:05 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa61c5c0
m30999| Thu Jun 14 01:30:05 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd976d72cd504d18aa720db'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa623148
m30999| Thu Jun 14 01:30:05 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:30:05 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:05-3", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339651805517), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:30:05 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339651799:1804289383' unlocked.
m30000| Thu Jun 14 01:30:05 [conn5] CMD: drop foo.bar
----
Done!
----
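The four "Checking ..." banners above exercise the same collection through the other mongos processes (m30995-m30998), which forces each stale router through the setShardVersion/writeback path seen in its log. A rough sketch of that check loop; the connection list and assertions are assumptions, and only the _id values come from the logged operations:
// Illustrative only: "staleMongoses" stands for the extra mongos connections
// (ports 30995-30998); the exact assertions used by the test are not in this log.
staleMongoses.forEach(function(conn) {
    var coll = conn.getCollection("foo.bar");
    assert.neq(null, coll.findOne({ _id: 1 }));            // Checking find...
    coll.update({ _id: 1 }, { $set: { updated: true } });  // Checking update...
    coll.insert({ _id: 101 });                              // Checking insert...
    coll.remove({ _id: 2 });                                // Checking remove...
});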
m30000| Thu Jun 14 01:30:05 [conn5] wiping data for: foo.bar
m30001| Thu Jun 14 01:30:05 [conn5] CMD: drop foo.bar
m30001| Thu Jun 14 01:30:05 [conn5] wiping data for: foo.bar
m30001| Thu Jun 14 01:30:05 [conn3] end connection 127.0.0.1:59193 (16 connections now open)
m30001| Thu Jun 14 01:30:05 [conn5] end connection 127.0.0.1:59196 (15 connections now open)
m30000| Thu Jun 14 01:30:05 [conn3] end connection 127.0.0.1:39272 (32 connections now open)
m30000| Thu Jun 14 01:30:05 [conn6] end connection 127.0.0.1:39277 (31 connections now open)
m30000| Thu Jun 14 01:30:05 [conn5] end connection 127.0.0.1:39276 (30 connections now open)
m30000| Thu Jun 14 01:30:05 [conn21] end connection 127.0.0.1:39302 (29 connections now open)
m30999| Thu Jun 14 01:30:05 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
Thu Jun 14 01:30:06 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:30:06 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Thu Jun 14 01:30:06 [conn7] end connection 127.0.0.1:59203 (14 connections now open)
m30000| Thu Jun 14 01:30:06 [conn7] end connection 127.0.0.1:39280 (28 connections now open)
m30000| Thu Jun 14 01:30:06 [conn9] end connection 127.0.0.1:39282 (27 connections now open)
m30000| Thu Jun 14 01:30:06 [conn27] end connection 127.0.0.1:39312 (26 connections now open)
Thu Jun 14 01:30:07 shell: stopped mongo program on port 30998
m30997| Thu Jun 14 01:30:07 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:30:07 [conn10] end connection 127.0.0.1:39285 (25 connections now open)
m30000| Thu Jun 14 01:30:07 [conn11] end connection 127.0.0.1:39286 (25 connections now open)
m30000| Thu Jun 14 01:30:07 [conn13] end connection 127.0.0.1:39288 (23 connections now open)
m30000| Thu Jun 14 01:30:07 [conn28] end connection 127.0.0.1:39315 (22 connections now open)
m30001| Thu Jun 14 01:30:07 [conn9] end connection 127.0.0.1:59206 (13 connections now open)
m30001| Thu Jun 14 01:30:07 [conn15] end connection 127.0.0.1:59215 (12 connections now open)
m30000| Thu Jun 14 01:30:07 [conn31] end connection 127.0.0.1:39324 (21 connections now open)
Thu Jun 14 01:30:08 shell: stopped mongo program on port 30997
m30996| Thu Jun 14 01:30:08 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Thu Jun 14 01:30:08 [conn11] end connection 127.0.0.1:59209 (11 connections now open)
m30001| Thu Jun 14 01:30:08 [conn16] end connection 127.0.0.1:59217 (10 connections now open)
m30000| Thu Jun 14 01:30:08 [conn29] end connection 127.0.0.1:39318 (20 connections now open)
m30000| Thu Jun 14 01:30:08 [conn15] end connection 127.0.0.1:39292 (20 connections now open)
m30000| Thu Jun 14 01:30:08 [conn14] end connection 127.0.0.1:39291 (18 connections now open)
m30000| Thu Jun 14 01:30:08 [conn32] end connection 127.0.0.1:39326 (17 connections now open)
m30000| Thu Jun 14 01:30:08 [conn17] end connection 127.0.0.1:39294 (16 connections now open)
Thu Jun 14 01:30:09 shell: stopped mongo program on port 30996
m30995| Thu Jun 14 01:30:09 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:30:09 [conn18] end connection 127.0.0.1:39297 (15 connections now open)
m30000| Thu Jun 14 01:30:09 [conn20] end connection 127.0.0.1:39299 (14 connections now open)
m30001| Thu Jun 14 01:30:09 [conn13] end connection 127.0.0.1:59212 (9 connections now open)
m30000| Thu Jun 14 01:30:09 [conn30] end connection 127.0.0.1:39321 (13 connections now open)
m30000| Thu Jun 14 01:30:09 [conn33] end connection 127.0.0.1:39328 (12 connections now open)
m30001| Thu Jun 14 01:30:09 [conn17] end connection 127.0.0.1:59219 (8 connections now open)
Thu Jun 14 01:30:10 shell: stopped mongo program on port 30995
m30000| Thu Jun 14 01:30:10 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:30:10 [interruptThread] now exiting
m30000| Thu Jun 14 01:30:10 dbexit:
m30000| Thu Jun 14 01:30:10 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:30:10 [interruptThread] closing listening socket: 10
m30000| Thu Jun 14 01:30:10 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:30:10 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:30:10 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:30:10 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:30:10 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:30:10 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:30:10 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:30:10 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:30:10 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:30:10 dbexit: really exiting now
m30001| Thu Jun 14 01:30:10 [conn6] end connection 127.0.0.1:59200 (7 connections now open)
Thu Jun 14 01:30:11 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:30:11 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:30:11 [interruptThread] now exiting
m30001| Thu Jun 14 01:30:11 dbexit:
m30001| Thu Jun 14 01:30:11 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:30:11 [interruptThread] closing listening socket: 13
m30001| Thu Jun 14 01:30:11 [interruptThread] closing listening socket: 14
m30001| Thu Jun 14 01:30:11 [interruptThread] closing listening socket: 16
m30001| Thu Jun 14 01:30:11 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:30:11 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:30:11 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:30:11 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:30:11 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:30:11 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:30:11 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:30:11 dbexit: really exiting now
Thu Jun 14 01:30:12 shell: stopped mongo program on port 30001
*** ShardingTest test completed successfully in 13.687 seconds ***
13741.966009ms
Thu Jun 14 01:30:12 [initandlisten] connection accepted from 127.0.0.1:59331 #16 (3 connections now open)
*******************************************
Test : complex_sharding.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/complex_sharding.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/complex_sharding.js";TestData.testFile = "complex_sharding.js";TestData.testName = "complex_sharding";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:30:12 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:30:12 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0 -vvvv
m30000| Thu Jun 14 01:30:12
m30000| Thu Jun 14 01:30:12 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:30:12
m30000| Thu Jun 14 01:30:12 versionCmpTest passed
m30000| Thu Jun 14 01:30:12 versionArrayTest passed
m30000| Thu Jun 14 01:30:12 isInRangeTest passed
m30000| Thu Jun 14 01:30:12 BackgroundJob starting: DataFileSync
m30000| Thu Jun 14 01:30:12 shardKeyTest passed
m30000| Thu Jun 14 01:30:12 shardObjTest passed
m30000| Thu Jun 14 01:30:12 [initandlisten] MongoDB starting : pid=23449 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:30:12 [initandlisten]
m30000| Thu Jun 14 01:30:12 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:30:12 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:30:12 [initandlisten]
m30000| Thu Jun 14 01:30:12 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:30:12 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:30:12 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:30:12 [initandlisten]
m30000| Thu Jun 14 01:30:12 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:30:12 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:30:12 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:30:12 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000, vvvv: true }
m30000| Thu Jun 14 01:30:12 [initandlisten] flushing directory /data/db/test0
m30000| Thu Jun 14 01:30:12 [initandlisten] opening db: local
m30000| Thu Jun 14 01:30:12 [initandlisten] enter repairDatabases (to check pdfile version #)
m30000| Thu Jun 14 01:30:12 [initandlisten] done repairDatabases
m30000| Thu Jun 14 01:30:12 BackgroundJob starting: snapshot
m30000| Thu Jun 14 01:30:12 BackgroundJob starting: ClientCursorMonitor
m30000| Thu Jun 14 01:30:12 [initandlisten] fd limit hard:1024 soft:1024 max conn: 819
m30000| Thu Jun 14 01:30:12 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:30:12 BackgroundJob starting: PeriodicTask::Runner
m30000| Thu Jun 14 01:30:12 BackgroundJob starting: TTLMonitor
m30000| Thu Jun 14 01:30:12 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30000| Thu Jun 14 01:30:12 [websvr] admin web console waiting for connections on port 31000
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31200,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs1",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 1,
"node" : 0,
"set" : "test-rs1"
},
"verbose" : 5,
"restart" : undefined
}
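The option document printed above is what ReplSetTest reports for node 0 of the test-rs1 shard. Roughly, the harness behaves like the constructor call sketched below; the variable name and option shape are illustrative, and only the set name, oplog size, and the arbiter flag (printed for node 1 further down) come from the logged options:
// Sketch of the ReplSetTest setup implied by the options above; not the
// literal code of complex_sharding.js.
var rst = new ReplSetTest({ name: "test-rs1", oplogSize: 40, nodes: [ {}, { arbiter: true } ] });
rst.startSet();    // launches mongod on ports 31200/31201, as logged below
rst.initiate();    // issues the replSetInitiate command shown further down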
ReplSetTest Starting....
Resetting db path '/data/db/test-rs1-0'
m30000| Thu Jun 14 01:30:12 [initandlisten] connection accepted from 127.0.0.1:39332 #1 (1 connection now open)
Thu Jun 14 01:30:12 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet test-rs1 --dbpath /data/db/test-rs1-0 -vvvvv
m31200| note: noprealloc may hurt performance in many applications
m31200| Thu Jun 14 01:30:12 BackgroundJob starting: DataFileSync
m31200| Thu Jun 14 01:30:12
m31200| Thu Jun 14 01:30:12 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31200| Thu Jun 14 01:30:12
m31200| Thu Jun 14 01:30:12 versionCmpTest passed
m31200| Thu Jun 14 01:30:12 versionArrayTest passed
m31200| Thu Jun 14 01:30:12 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31200| Thu Jun 14 01:30:12 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31200| Thu Jun 14 01:30:12 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31200| Thu Jun 14 01:30:12 Matcher::matches() { abcdef: "z23456789" }
m31200| Thu Jun 14 01:30:12 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31200| Thu Jun 14 01:30:12 Matcher::matches() { abcdef: "z23456789" }
m31200| Thu Jun 14 01:30:12 isInRangeTest passed
m31200| Thu Jun 14 01:30:12 shardKeyTest passed
m31200| Thu Jun 14 01:30:12 shardObjTest passed
m31200| Thu Jun 14 01:30:12 [initandlisten] MongoDB starting : pid=23462 port=31200 dbpath=/data/db/test-rs1-0 32-bit host=domU-12-31-39-01-70-B4
m31200| Thu Jun 14 01:30:12 [initandlisten]
m31200| Thu Jun 14 01:30:12 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31200| Thu Jun 14 01:30:12 [initandlisten] ** Not recommended for production.
m31200| Thu Jun 14 01:30:12 [initandlisten]
m31200| Thu Jun 14 01:30:12 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31200| Thu Jun 14 01:30:12 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31200| Thu Jun 14 01:30:12 [initandlisten] ** with --journal, the limit is lower
m31200| Thu Jun 14 01:30:12 [initandlisten]
m31200| Thu Jun 14 01:30:12 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31200| Thu Jun 14 01:30:12 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31200| Thu Jun 14 01:30:12 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31200| Thu Jun 14 01:30:12 [initandlisten] options: { dbpath: "/data/db/test-rs1-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "test-rs1", rest: true, smallfiles: true, vvvvv: true }
m31200| Thu Jun 14 01:30:12 [initandlisten] flushing directory /data/db/test-rs1-0
m31200| Thu Jun 14 01:30:12 [initandlisten] enter repairDatabases (to check pdfile version #)
m31200| Thu Jun 14 01:30:12 [initandlisten] done repairDatabases
m31200| Thu Jun 14 01:30:12 BackgroundJob starting: snapshot
m31200| Thu Jun 14 01:30:12 BackgroundJob starting: ClientCursorMonitor
m31200| Thu Jun 14 01:30:12 BackgroundJob starting: PeriodicTask::Runner
m31200| Thu Jun 14 01:30:12 BackgroundJob starting: TTLMonitor
m31200| Thu Jun 14 01:30:12 [initandlisten] fd limit hard:1024 soft:1024 max conn: 819
m31200| Thu Jun 14 01:30:12 [initandlisten] waiting for connections on port 31200
m31200| Thu Jun 14 01:30:12 [rsStart] replSet beginning startup...
m31200| Thu Jun 14 01:30:12 [rsStart] loadConfig() local.system.replset
m31200| Thu Jun 14 01:30:12 [rsStart] ReplSetConfig load :domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:30:12 [rsStart] getMyAddrs(): [127.0.0.1] [10.255.119.66] [::1] [fe80::1031:39ff:fe01:70b4%eth0]
m31200| Thu Jun 14 01:30:12 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m31200| Thu Jun 14 01:30:12 [websvr] admin web console waiting for connections on port 32200
m31200| Thu Jun 14 01:30:12 [rsStart] getallIPs("domU-12-31-39-01-70-B4"): [10.255.119.66]
m31200| Thu Jun 14 01:30:12 BackgroundJob starting: ConnectBG
m31200| Thu Jun 14 01:30:12 [initandlisten] connection accepted from 10.255.119.66:35427 #1 (1 connection now open)
m31200| Thu Jun 14 01:30:12 [conn1] runQuery called local.system.replset {}
m31200| Thu Jun 14 01:30:12 [conn1] opening db: local
m31200| Thu Jun 14 01:30:12 [conn1] query local.system.replset ntoreturn:1 keyUpdates:0 locks(micros) W:140 r:90 nreturned:0 reslen:20 0ms
m31200| Thu Jun 14 01:30:12 [conn1] runQuery called local.$cmd { count: "system.replset", query: {} }
m31200| Thu Jun 14 01:30:12 [conn1] run command local.$cmd { count: "system.replset", query: {} }
m31200| Thu Jun 14 01:30:12 [conn1] command local.$cmd command: { count: "system.replset", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:140 r:115 reslen:58 0ms
m31200| Thu Jun 14 01:30:12 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Thu Jun 14 01:30:12 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31200| Thu Jun 14 01:30:12 [rsStart] replSet info no seed hosts were specified on the --replSet command line
m31200| Thu Jun 14 01:30:13 [initandlisten] connection accepted from 127.0.0.1:49982 #2 (2 connections now open)
[ connection to domU-12-31-39-01-70-B4:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31201,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs1",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 1,
"node" : 1,
"set" : "test-rs1"
},
"verbose" : 6,
"arbiter" : true,
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs1-1'
Thu Jun 14 01:30:13 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 1 --port 31201 --noprealloc --smallfiles --rest --replSet test-rs1 --dbpath /data/db/test-rs1-1 -vvvvvv
m31201| note: noprealloc may hurt performance in many applications
m31201| Thu Jun 14 01:30:13
m31201| Thu Jun 14 01:30:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31201| Thu Jun 14 01:30:13
m31201| Thu Jun 14 01:30:13 versionCmpTest passed
m31201| Thu Jun 14 01:30:13 versionArrayTest passed
m31201| Thu Jun 14 01:30:13 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31201| Thu Jun 14 01:30:13 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31201| Thu Jun 14 01:30:13 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31201| Thu Jun 14 01:30:13 Matcher::matches() { abcdef: "z23456789" }
m31201| Thu Jun 14 01:30:13 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m31201| Thu Jun 14 01:30:13 Matcher::matches() { abcdef: "z23456789" }
m31201| Thu Jun 14 01:30:13 isInRangeTest passed
m31201| Thu Jun 14 01:30:13 shardKeyTest passed
m31201| Thu Jun 14 01:30:13 shardObjTest passed
m31201| Thu Jun 14 01:30:13 [initandlisten] MongoDB starting : pid=23478 port=31201 dbpath=/data/db/test-rs1-1 32-bit host=domU-12-31-39-01-70-B4
m31201| Thu Jun 14 01:30:13 [initandlisten]
m31201| Thu Jun 14 01:30:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31201| Thu Jun 14 01:30:13 [initandlisten] ** Not recommended for production.
m31201| Thu Jun 14 01:30:13 [initandlisten]
m31201| Thu Jun 14 01:30:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31201| Thu Jun 14 01:30:13 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31201| Thu Jun 14 01:30:13 [initandlisten] ** with --journal, the limit is lower
m31201| Thu Jun 14 01:30:13 [initandlisten]
m31201| Thu Jun 14 01:30:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31201| Thu Jun 14 01:30:13 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31201| Thu Jun 14 01:30:13 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31201| Thu Jun 14 01:30:13 [initandlisten] options: { dbpath: "/data/db/test-rs1-1", noprealloc: true, oplogSize: 1, port: 31201, replSet: "test-rs1", rest: true, smallfiles: true, vvvvvv: true }
m31201| Thu Jun 14 01:30:13 [initandlisten] flushing directory /data/db/test-rs1-1
m31201| Thu Jun 14 01:30:13 [initandlisten] enter repairDatabases (to check pdfile version #)
m31201| Thu Jun 14 01:30:13 [initandlisten] done repairDatabases
m31201| Thu Jun 14 01:30:13 BackgroundJob starting: snapshot
m31201| Thu Jun 14 01:30:13 BackgroundJob starting: ClientCursorMonitor
m31201| Thu Jun 14 01:30:13 BackgroundJob starting: PeriodicTask::Runner
m31201| Thu Jun 14 01:30:13 BackgroundJob starting: TTLMonitor
m31201| Thu Jun 14 01:30:13 [rsStart] replSet beginning startup...
m31201| Thu Jun 14 01:30:13 [rsStart] loadConfig() local.system.replset
m31201| Thu Jun 14 01:30:13 [rsStart] ReplSetConfig load :domU-12-31-39-01-70-B4:31201
m31201| Thu Jun 14 01:30:13 [rsStart] getMyAddrs(): [127.0.0.1] [10.255.119.66] [::1] [fe80::1031:39ff:fe01:70b4%eth0]
m31201| Thu Jun 14 01:30:13 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m31201| Thu Jun 14 01:30:13 [websvr] admin web console waiting for connections on port 32201
m31201| Thu Jun 14 01:30:13 [initandlisten] fd limit hard:1024 soft:1024 max conn: 819
m31201| Thu Jun 14 01:30:13 [initandlisten] waiting for connections on port 31201
m31201| Thu Jun 14 01:30:13 [rsStart] getallIPs("domU-12-31-39-01-70-B4"): [10.255.119.66]
m31201| Thu Jun 14 01:30:13 BackgroundJob starting: ConnectBG
m31201| Thu Jun 14 01:30:13 [initandlisten] connection accepted from 10.255.119.66:47816 #1 (1 connection now open)
m31201| Thu Jun 14 01:30:13 [conn1] runQuery called local.system.replset {}
m31201| Thu Jun 14 01:30:13 [conn1] opening db: local
m31201| Thu Jun 14 01:30:13 [conn1] query local.system.replset ntoreturn:1 keyUpdates:0 locks(micros) W:145 r:89 nreturned:0 reslen:20 0ms
m31201| Thu Jun 14 01:30:13 [conn1] runQuery called local.$cmd { count: "system.replset", query: {} }
m31201| Thu Jun 14 01:30:13 [conn1] run command local.$cmd { count: "system.replset", query: {} }
m31201| Thu Jun 14 01:30:13 [conn1] command local.$cmd command: { count: "system.replset", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:145 r:109 reslen:58 0ms
m31201| Thu Jun 14 01:30:13 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Thu Jun 14 01:30:13 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Thu Jun 14 01:30:13 [rsStart] replSet info no seed hosts were specified on the --replSet command line
m31201| Thu Jun 14 01:30:13 BackgroundJob starting: DataFileSync
[
connection to domU-12-31-39-01-70-B4:31200,
connection to domU-12-31-39-01-70-B4:31201
]
{
"replSetInitiate" : {
"_id" : "test-rs1",
"members" : [
{
"_id" : 0,
"host" : "domU-12-31-39-01-70-B4:31200"
},
{
"_id" : 1,
"host" : "domU-12-31-39-01-70-B4:31201",
"arbiterOnly" : true
}
]
}
}
m31201| Thu Jun 14 01:30:13 [initandlisten] connection accepted from 127.0.0.1:49818 #2 (2 connections now open)
m31200| Thu Jun 14 01:30:13 [conn2] runQuery called admin.$cmd { replSetInitiate: { _id: "test-rs1", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31200" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31201", arbiterOnly: true } ] } }
m31200| Thu Jun 14 01:30:13 [conn2] run command admin.$cmd { replSetInitiate: { _id: "test-rs1", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31200" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31201", arbiterOnly: true } ] } }
m31200| Thu Jun 14 01:30:13 [conn2] command: { replSetInitiate: { _id: "test-rs1", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31200" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31201", arbiterOnly: true } ] } }
m31200| Thu Jun 14 01:30:13 [conn2] replSet replSetInitiate admin command received from client
m31200| Thu Jun 14 01:30:13 [conn2] replSet replSetInitiate config object parses ok, 2 members specified
m31200| Thu Jun 14 01:30:13 BackgroundJob starting: ConnectBG
m31201| Thu Jun 14 01:30:13 [initandlisten] connection accepted from 10.255.119.66:47818 #3 (3 connections now open)
m31201| Thu Jun 14 01:30:13 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: -1, pv: 1, checkEmpty: true, from: "" }
m31201| Thu Jun 14 01:30:13 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: -1, pv: 1, checkEmpty: true, from: "" }
m31201| Thu Jun 14 01:30:13 [conn3] command: { replSetHeartbeat: "test-rs1", v: -1, pv: 1, checkEmpty: true, from: "" }
m31201| Thu Jun 14 01:30:13 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: -1, pv: 1, checkEmpty: true, from: "" } ntoreturn:1 keyUpdates:0 reslen:82 0ms
m31200| Thu Jun 14 01:30:13 [conn2] replSet replSetInitiate all members seem up
m31200| Thu Jun 14 01:30:13 [conn2] ******
m31200| Thu Jun 14 01:30:13 [conn2] creating replication oplog of size: 40MB...
m31200| Thu Jun 14 01:30:13 [conn2] create collection local.oplog.rs { size: 41943040.0, capped: true, autoIndexId: false }
m31200| Thu Jun 14 01:30:13 [conn2] mmf create /data/db/test-rs1-0/local.ns
m31200| Thu Jun 14 01:30:13 [FileAllocator] allocating new datafile /data/db/test-rs1-0/local.ns, filling with zeroes...
m31200| Thu Jun 14 01:30:13 [FileAllocator] creating directory /data/db/test-rs1-0/_tmp
m31200| Thu Jun 14 01:30:13 [FileAllocator] flushing directory /data/db/test-rs1-0
m31200| Thu Jun 14 01:30:13 [FileAllocator] flushing directory /data/db/test-rs1-0
m31200| Thu Jun 14 01:30:13 [FileAllocator] done allocating datafile /data/db/test-rs1-0/local.ns, size: 16MB, took 0.23 secs
m31200| Thu Jun 14 01:30:13 [conn2] mmf finishOpening 0xb03fe000 /data/db/test-rs1-0/local.ns len:16777216
m31200| Thu Jun 14 01:30:13 [conn2] mmf create /data/db/test-rs1-0/local.0
m31200| Thu Jun 14 01:30:13 [FileAllocator] allocating new datafile /data/db/test-rs1-0/local.0, filling with zeroes...
m31200| Thu Jun 14 01:30:13 [FileAllocator] flushing directory /data/db/test-rs1-0
m31200| Thu Jun 14 01:30:14 [FileAllocator] done allocating datafile /data/db/test-rs1-0/local.0, size: 64MB, took 1.248 secs
m31200| Thu Jun 14 01:30:14 [conn2] mmf finishOpening 0xac3fe000 /data/db/test-rs1-0/local.0 len:67108864
m31200| Thu Jun 14 01:30:14 [conn2] allocExtent local.oplog.rs size 41943296 0
m31200| Thu Jun 14 01:30:14 [conn2] New namespace: local.oplog.rs
m31200| Thu Jun 14 01:30:14 [conn2] allocExtent local.system.namespaces size 5120 0
m31200| Thu Jun 14 01:30:14 [conn2] New namespace: local.system.namespaces
m31200| Thu Jun 14 01:30:14 [conn2] ******
m31200| Thu Jun 14 01:30:14 [conn2] replSet info saving a newer config version to local.system.replset
m31200| Thu Jun 14 01:30:14 [conn2] allocExtent local.system.replset size 11264 0
m31200| Thu Jun 14 01:30:14 [conn2] New namespace: local.system.replset
m31200| Thu Jun 14 01:30:14 [conn2] replSet saveConfigLocally done
m31200| Thu Jun 14 01:30:14 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Thu Jun 14 01:30:14 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "test-rs1", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31200" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31201", arbiterOnly: true } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:112 1519ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
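The "Config now saved locally" reply above is the result of the replSetInitiate document printed just before it. From a plain shell the same initiation could be issued directly against the node on port 31200, along these lines:
// Sketch of issuing the logged replSetInitiate by hand (the harness does this
// automatically); host names copied from the config document above.
var conn = new Mongo("domU-12-31-39-01-70-B4:31200");
printjson(conn.getDB("admin").runCommand({
    replSetInitiate: {
        _id: "test-rs1",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31200" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31201", arbiterOnly: true }
        ]
    }
}));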
m31200| Thu Jun 14 01:30:14 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:14 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:14 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:182 0ms
m31201| Thu Jun 14 01:30:14 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:14 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:14 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:201 0ms
m31200| Thu Jun 14 01:30:16 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:16 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:16 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:182 0ms
m31201| Thu Jun 14 01:30:16 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:16 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:16 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:201 0ms
m31200| Thu Jun 14 01:30:18 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:18 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:18 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:182 0ms
m31201| Thu Jun 14 01:30:18 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:18 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:18 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:201 0ms
m31200| Thu Jun 14 01:30:20 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:20 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:20 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:182 0ms
m31201| Thu Jun 14 01:30:20 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:20 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:20 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:201 0ms
m31200| Thu Jun 14 01:30:22 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:22 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:22 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:182 0ms
m31201| Thu Jun 14 01:30:22 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:22 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:22 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:201 0ms
m31200| Thu Jun 14 01:30:22 [rsStart] ReplSetConfig load :domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:30:22 [conn1] runQuery called local.system.replset {}
m31200| Thu Jun 14 01:30:22 [conn1] query local.system.replset ntoreturn:1 keyUpdates:0 locks(micros) W:140 r:180 nreturned:1 reslen:196 0ms
m31200| Thu Jun 14 01:30:22 [conn1] runQuery called local.$cmd { count: "system.replset", query: {} }
m31200| Thu Jun 14 01:30:22 [conn1] run command local.$cmd { count: "system.replset", query: {} }
m31200| Thu Jun 14 01:30:22 [conn1] command local.$cmd command: { count: "system.replset", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:140 r:198 reslen:48 0ms
m31200| Thu Jun 14 01:30:22 [rsStart] replSet load config ok from self
m31200| Thu Jun 14 01:30:22 [rsStart] replSet I am domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:30:22 [rsStart] replSet STARTUP2
m31200| Thu Jun 14 01:30:22 BackgroundJob starting: rsHealthPoll
m31201| Thu Jun 14 01:30:22 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:22 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:22 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:22 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:72 0ms
m31200| Thu Jun 14 01:30:22 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is up
m31200| Thu Jun 14 01:30:22 BackgroundJob starting: rsMgr
m31200| Thu Jun 14 01:30:22 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31200| Thu Jun 14 01:30:22 [rsSync] replSet SECONDARY
m31200| Thu Jun 14 01:30:22 [rsBackgroundSync] replset bgsync fetch queue set to: 4fd976e6:1 0
m31200| Thu Jun 14 01:30:22 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:22 BackgroundJob starting: rsGhostSync
m31201| Thu Jun 14 01:30:23 [rsStart] ReplSetConfig load :domU-12-31-39-01-70-B4:31201
m31201| Thu Jun 14 01:30:23 [conn1] runQuery called local.system.replset {}
m31201| Thu Jun 14 01:30:23 [conn1] query local.system.replset ntoreturn:1 keyUpdates:0 locks(micros) W:145 r:141 nreturned:0 reslen:20 0ms
m31201| Thu Jun 14 01:30:23 [conn1] runQuery called local.$cmd { count: "system.replset", query: {} }
m31201| Thu Jun 14 01:30:23 [conn1] run command local.$cmd { count: "system.replset", query: {} }
m31201| Thu Jun 14 01:30:23 [conn1] command local.$cmd command: { count: "system.replset", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:145 r:154 reslen:58 0ms
m31201| Thu Jun 14 01:30:23 [rsStart] ReplSetConfig load :domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:30:23 [rsStart] trying to contact domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:30:23 BackgroundJob starting: ConnectBG
m31200| Thu Jun 14 01:30:23 [initandlisten] connection accepted from 10.255.119.66:35433 #3 (3 connections now open)
m31200| Thu Jun 14 01:30:23 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: -2, pv: 1, checkEmpty: false, from: "" }
m31200| Thu Jun 14 01:30:23 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: -2, pv: 1, checkEmpty: false, from: "" }
m31200| Thu Jun 14 01:30:23 [conn3] command: { replSetHeartbeat: "test-rs1", v: -2, pv: 1, checkEmpty: false, from: "" }
m31200| Thu Jun 14 01:30:23 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: -2, pv: 1, checkEmpty: false, from: "" } ntoreturn:1 keyUpdates:0 reslen:308 0ms
m31200| Thu Jun 14 01:30:23 [conn3] runQuery called local.system.replset {}
m31200| Thu Jun 14 01:30:23 [conn3] query local.system.replset ntoreturn:1 keyUpdates:0 locks(micros) r:33 nreturned:1 reslen:196 0ms
m31200| Thu Jun 14 01:30:23 [conn3] runQuery called local.$cmd { count: "system.replset", query: {} }
m31200| Thu Jun 14 01:30:23 [conn3] run command local.$cmd { count: "system.replset", query: {} }
m31200| Thu Jun 14 01:30:23 [conn3] command local.$cmd command: { count: "system.replset", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:48 0ms
m31201| Thu Jun 14 01:30:23 [rsStart] replSet load config ok from domU-12-31-39-01-70-B4:31200
m31201| Thu Jun 14 01:30:23 [rsStart] replSet I am domU-12-31-39-01-70-B4:31201
m31201| Thu Jun 14 01:30:23 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Thu Jun 14 01:30:23 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Thu Jun 14 01:30:23 [rsStart] mmf create /data/db/test-rs1-1/local.ns
m31201| Thu Jun 14 01:30:23 [FileAllocator] allocating new datafile /data/db/test-rs1-1/local.ns, filling with zeroes...
m31201| Thu Jun 14 01:30:23 [FileAllocator] creating directory /data/db/test-rs1-1/_tmp
m31201| Thu Jun 14 01:30:23 [FileAllocator] flushing directory /data/db/test-rs1-1
m31201| Thu Jun 14 01:30:23 BackgroundJob starting: rsHealthPoll
m31201| Thu Jun 14 01:30:23 [rsHealthPoll] replSet not initialized yet, skipping health poll this round
m31201| Thu Jun 14 01:30:23 [FileAllocator] flushing directory /data/db/test-rs1-1
m31201| Thu Jun 14 01:30:23 [FileAllocator] done allocating datafile /data/db/test-rs1-1/local.ns, size: 16MB, took 0.24 secs
m31201| Thu Jun 14 01:30:23 [rsStart] mmf finishOpening 0xafabb000 /data/db/test-rs1-1/local.ns len:16777216
m31201| Thu Jun 14 01:30:23 [rsStart] mmf create /data/db/test-rs1-1/local.0
m31201| Thu Jun 14 01:30:23 [FileAllocator] allocating new datafile /data/db/test-rs1-1/local.0, filling with zeroes...
m31201| Thu Jun 14 01:30:23 [FileAllocator] flushing directory /data/db/test-rs1-1
m31201| Thu Jun 14 01:30:23 [FileAllocator] done allocating datafile /data/db/test-rs1-1/local.0, size: 16MB, took 0.241 secs
m31201| Thu Jun 14 01:30:23 [rsStart] mmf finishOpening 0xaeabb000 /data/db/test-rs1-1/local.0 len:16777216
m31201| Thu Jun 14 01:30:23 [rsStart] mmf close
m31201| Thu Jun 14 01:30:23 [rsStart] allocExtent local.system.replset size 11264 0
m31201| Thu Jun 14 01:30:23 [rsStart] New namespace: local.system.replset
m31201| Thu Jun 14 01:30:23 [rsStart] allocExtent local.system.namespaces size 2304 0
m31201| Thu Jun 14 01:30:23 [rsStart] New namespace: local.system.namespaces
m31201| Thu Jun 14 01:30:23 [rsStart] replSet saveConfigLocally done
m31201| Thu Jun 14 01:30:23 [rsStart] replSet STARTUP2
m31201| Thu Jun 14 01:30:23 BackgroundJob starting: rsMgr
m31201| Thu Jun 14 01:30:23 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31200| Thu Jun 14 01:30:23 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:24 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:24 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:24 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:260 0ms
m31201| Thu Jun 14 01:30:24 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:24 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:24 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:274 0ms
m31201| Thu Jun 14 01:30:24 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:24 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:24 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:24 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31200| Thu Jun 14 01:30:24 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state STARTUP2
m31200| Thu Jun 14 01:30:24 BackgroundJob starting: MultiCommandJob
m31201| Thu Jun 14 01:30:24 [conn3] runQuery called admin.$cmd { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 }
m31201| Thu Jun 14 01:30:24 [conn3] run command admin.$cmd { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 }
m31201| Thu Jun 14 01:30:24 [conn3] command: { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 }
m31201| Thu Jun 14 01:30:24 [conn3] command admin.$cmd command: { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms
m31200| Thu Jun 14 01:30:24 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31201 would veto
m31200| Thu Jun 14 01:30:24 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:25 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:25 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:25 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:25 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:124 0ms
m31201| Thu Jun 14 01:30:25 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is up
m31201| Thu Jun 14 01:30:25 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state SECONDARY
m31200| Thu Jun 14 01:30:25 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:26 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:26 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:26 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:260 0ms
m31201| Thu Jun 14 01:30:26 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:26 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:26 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:274 0ms
m31201| Thu Jun 14 01:30:26 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:26 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:26 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:26 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31200| Thu Jun 14 01:30:26 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:27 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:27 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:27 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:27 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:124 0ms
m31200| Thu Jun 14 01:30:27 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:28 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:28 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:28 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:260 0ms
m31201| Thu Jun 14 01:30:28 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:28 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:28 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:274 0ms
m31201| Thu Jun 14 01:30:28 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:28 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:28 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:28 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31200| Thu Jun 14 01:30:28 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:29 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:29 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:29 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:29 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:124 0ms
m31200| Thu Jun 14 01:30:29 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:30 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:30 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:30 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:260 0ms
m31201| Thu Jun 14 01:30:30 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:30 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:30 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:274 0ms
m31201| Thu Jun 14 01:30:30 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:30 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:30 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:30 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31200| Thu Jun 14 01:30:30 [rsSyncNotifier] replset markOplog: 0:0 4fd976e6:1
m31200| Thu Jun 14 01:30:30 BackgroundJob starting: MultiCommandJob
m31201| Thu Jun 14 01:30:30 [conn3] runQuery called admin.$cmd { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 }
m31201| Thu Jun 14 01:30:30 [conn3] run command admin.$cmd { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 }
m31201| Thu Jun 14 01:30:30 [conn3] command: { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 }
m31201| Thu Jun 14 01:30:30 [conn3] command admin.$cmd command: { replSetFresh: 1, set: "test-rs1", opTime: new Date(5753760729157074945), who: "domU-12-31-39-01-70-B4:31200", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms
m31200| Thu Jun 14 01:30:30 [rsMgr] replSet dev we are freshest of up nodes, nok:1 nTies:0
m31200| Thu Jun 14 01:30:30 [rsMgr] replSet info electSelf 0
m31200| Thu Jun 14 01:30:30 BackgroundJob starting: MultiCommandJob
m31201| Thu Jun 14 01:30:30 [conn3] runQuery called admin.$cmd { replSetElect: 1, set: "test-rs1", who: "domU-12-31-39-01-70-B4:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd976f6ad9e96d4a398643e') }
m31201| Thu Jun 14 01:30:30 [conn3] run command admin.$cmd { replSetElect: 1, set: "test-rs1", who: "domU-12-31-39-01-70-B4:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd976f6ad9e96d4a398643e') }
m31201| Thu Jun 14 01:30:30 [conn3] command: { replSetElect: 1, set: "test-rs1", who: "domU-12-31-39-01-70-B4:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd976f6ad9e96d4a398643e') }
m31201| Thu Jun 14 01:30:30 [conn3] replSet received elect msg { replSetElect: 1, set: "test-rs1", who: "domU-12-31-39-01-70-B4:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd976f6ad9e96d4a398643e') }
m31201| Thu Jun 14 01:30:30 [conn3] replSet attempting to relinquish
m31201| Thu Jun 14 01:30:30 [conn3] replSet RECOVERING
m31201| Thu Jun 14 01:30:30 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31200 (0)
m31201| Thu Jun 14 01:30:30 [conn3] command admin.$cmd command: { replSetElect: 1, set: "test-rs1", who: "domU-12-31-39-01-70-B4:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd976f6ad9e96d4a398643e') } ntoreturn:1 keyUpdates:0 reslen:66 0ms
m31200| Thu Jun 14 01:30:30 [rsMgr] replSet election succeeded, assuming primary role
m31200| Thu Jun 14 01:30:30 [rsMgr] replSet assuming primary
m31200| Thu Jun 14 01:30:30 [rsMgr] runQuery called local.oplog.rs { query: {}, orderby: { $natural: -1 } }
m31200| Thu Jun 14 01:30:30 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31200| Thu Jun 14 01:30:30 [rsMgr] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:119 nreturned:1 reslen:99 0ms
m31200| Thu Jun 14 01:30:30 [rsMgr] replSet PRIMARY
m31200| Thu Jun 14 01:30:31 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:31 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:31 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:31 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:124 0ms
m31201| Thu Jun 14 01:30:31 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31200 is now in state PRIMARY
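The election above ends with domU-12-31-39-01-70-B4:31200 as PRIMARY and the other member seeing it as such. A minimal mongo-shell sketch for checking that outcome by hand (hosts and ports are taken from this log; the commands are the same ismaster/replSetGetStatus calls the log shows):

    // Connect directly to the node that just won the election.
    var admin = new Mongo("domU-12-31-39-01-70-B4:31200").getDB("admin");
    // ismaster reports whether this node currently considers itself primary.
    printjson(admin.runCommand({ ismaster: 1 }));
    // replSetGetStatus lists every member with its state (PRIMARY/SECONDARY/ARBITER).
    printjson(admin.runCommand({ replSetGetStatus: 1 }));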
m31200| Thu Jun 14 01:30:32 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:32 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:32 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:35 reslen:302 0ms
m31200| Thu Jun 14 01:30:32 [conn2] opening db: admin
m31200| Thu Jun 14 01:30:32 [conn2] mmf create /data/db/test-rs1-0/admin.ns
m31201| Thu Jun 14 01:30:32 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:32 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:32 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:316 0ms
m31200| Thu Jun 14 01:30:32 [FileAllocator] allocating new datafile /data/db/test-rs1-0/admin.ns, filling with zeroes...
m31200| Thu Jun 14 01:30:32 [FileAllocator] flushing directory /data/db/test-rs1-0
m31201| Thu Jun 14 01:30:32 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:32 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:32 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:32 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31200| Thu Jun 14 01:30:32 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31201 is now in state ARBITER
m31200| Thu Jun 14 01:30:33 [FileAllocator] done allocating datafile /data/db/test-rs1-0/admin.ns, size: 16MB, took 0.252 secs
m31200| Thu Jun 14 01:30:33 [conn2] mmf finishOpening 0xa76f7000 /data/db/test-rs1-0/admin.ns len:16777216
m31200| Thu Jun 14 01:30:33 [conn2] mmf create /data/db/test-rs1-0/admin.0
m31200| Thu Jun 14 01:30:33 [FileAllocator] allocating new datafile /data/db/test-rs1-0/admin.0, filling with zeroes...
m31200| Thu Jun 14 01:30:33 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:33 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:33 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:33 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:124 0ms
m31200| Thu Jun 14 01:30:33 [FileAllocator] flushing directory /data/db/test-rs1-0
ReplSetTest Timestamp(1339651833000, 1)
ReplSetTest await synced=true
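The two ReplSetTest lines above are the test harness waiting for the set to catch up to the primary's optime, Timestamp(1339651833000, 1). A rough sketch of the same wait, assuming a ReplSetTest handle named rst (that name is an assumption, not something shown in this log):

    // Harness-level wait: blocks until the data-bearing members have replicated
    // the primary's latest operation.
    rst.awaitReplication();
    // By hand, member optimes can be compared from the primary's status output.
    var admin = new Mongo("domU-12-31-39-01-70-B4:31200").getDB("admin");
    printjson(admin.runCommand({ replSetGetStatus: 1 }).members);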
m31200| Thu Jun 14 01:30:33 [FileAllocator] done allocating datafile /data/db/test-rs1-0/admin.0, size: 16MB, took 0.262 secs
m31200| Thu Jun 14 01:30:33 [conn2] mmf finishOpening 0xa66f7000 /data/db/test-rs1-0/admin.0 len:16777216
m31200| Thu Jun 14 01:30:33 [conn2] mmf close
m31200| Thu Jun 14 01:30:33 [conn2] allocExtent admin.foo size 2048 0
m31200| Thu Jun 14 01:30:33 [conn2] adding _id index for collection admin.foo
m31200| Thu Jun 14 01:30:33 [conn2] allocExtent admin.system.indexes size 3584 0
m31200| Thu Jun 14 01:30:33 [conn2] New namespace: admin.system.indexes
m31200| Thu Jun 14 01:30:33 [conn2] allocExtent admin.system.namespaces size 2304 0
m31200| Thu Jun 14 01:30:33 [conn2] New namespace: admin.system.namespaces
m31200| Thu Jun 14 01:30:33 [conn2] build index admin.foo { _id: 1 }
m31200| mem info: before index start vsize: 303 resident: 48 mapped: 112
m31200| Thu Jun 14 01:30:33 [conn2] external sort root: /data/db/test-rs1-0/_tmp/esort.1339651833.0/
m31200| mem info: before final sort vsize: 303 resident: 48 mapped: 112
m31200| mem info: after final sort vsize: 303 resident: 48 mapped: 112
m31200| Thu Jun 14 01:30:33 [conn2] external sort used : 0 files in 0 secs
m31200| Thu Jun 14 01:30:33 [conn2] allocExtent admin.foo.$_id_ size 36864 0
m31200| Thu Jun 14 01:30:33 [conn2] New namespace: admin.foo.$_id_
m31200| Thu Jun 14 01:30:33 [conn2] done building bottom layer, going to commit
m31200| Thu Jun 14 01:30:33 [conn2] build index done. scanned 0 total records. 0 secs
m31200| Thu Jun 14 01:30:33 [conn2] New namespace: admin.foo
m31200| Thu Jun 14 01:30:33 [conn2] insert admin.foo keyUpdates:0 locks(micros) W:1520552 w:525998 525ms
m31200| Thu Jun 14 01:30:33 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:33 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31200| Thu Jun 14 01:30:33 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 w:525998 reslen:302 0ms
m31200| Thu Jun 14 01:30:33 [conn2] runQuery called local.oplog.rs { query: {}, orderby: { $natural: -1.0 } }
m31200| Thu Jun 14 01:30:33 [conn2] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:1520552 r:69 w:525998 nreturned:1 reslen:112 0ms
m31201| Thu Jun 14 01:30:33 [conn2] runQuery called admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:33 [conn2] run command admin.$cmd { ismaster: 1.0 }
m31201| Thu Jun 14 01:30:33 [conn2] command admin.$cmd command: { ismaster: 1.0 } ntoreturn:1 keyUpdates:0 reslen:316 0ms
m31201| Thu Jun 14 01:30:33 [conn2] runQuery called admin.$cmd { replSetGetStatus: 1.0 }
m31201| Thu Jun 14 01:30:33 [conn2] run command admin.$cmd { replSetGetStatus: 1.0 }
m31201| Thu Jun 14 01:30:33 [conn2] command: { replSetGetStatus: 1.0 }
m31201| Thu Jun 14 01:30:33 [conn2] command admin.$cmd command: { replSetGetStatus: 1.0 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
Thu Jun 14 01:30:33 starting new replica set monitor for replica set test-rs1 with seed of domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201
Thu Jun 14 01:30:33 successfully connected to seed domU-12-31-39-01-70-B4:31200 for replica set test-rs1
m31200| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:35434 #4 (4 connections now open)
m31200| Thu Jun 14 01:30:33 [conn4] runQuery called admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:33 [conn4] run command admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:33 [conn4] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:302 0ms
Thu Jun 14 01:30:33 changing hosts to { 0: "domU-12-31-39-01-70-B4:31200" } from test-rs1/
Thu Jun 14 01:30:33 trying to add new host domU-12-31-39-01-70-B4:31200 to replica set test-rs1
Thu Jun 14 01:30:33 successfully connected to new host domU-12-31-39-01-70-B4:31200 in replica set test-rs1
m31200| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:35435 #5 (5 connections now open)
m31200| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:35436 #6 (6 connections now open)
m31200| Thu Jun 14 01:30:33 [conn6] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] run command admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] command: { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
m31200| Thu Jun 14 01:30:33 [conn4] Socket recv() conn closed? 10.255.119.66:35434
m31200| Thu Jun 14 01:30:33 [conn4] SocketException: remote: 10.255.119.66:35434 error: 9001 socket exception [0] server [10.255.119.66:35434]
m31200| Thu Jun 14 01:30:33 [conn4] end connection 10.255.119.66:35434 (5 connections now open)
Thu Jun 14 01:30:33 successfully connected to seed domU-12-31-39-01-70-B4:31201 for replica set test-rs1
m31201| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:47823 #4 (4 connections now open)
m31201| Thu Jun 14 01:30:33 [conn4] runQuery called admin.$cmd { ismaster: 1 }
m31201| Thu Jun 14 01:30:33 [conn4] run command admin.$cmd { ismaster: 1 }
m31201| Thu Jun 14 01:30:33 [conn4] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:316 0ms
m31201| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:47824 #5 (5 connections now open)
m31201| Thu Jun 14 01:30:33 [conn5] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31201| Thu Jun 14 01:30:33 [conn5] run command admin.$cmd { replSetGetStatus: 1 }
m31201| Thu Jun 14 01:30:33 [conn5] command: { replSetGetStatus: 1 }
m31201| Thu Jun 14 01:30:33 [conn5] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
m31201| Thu Jun 14 01:30:33 [conn4] Socket recv() conn closed? 10.255.119.66:47823
m31201| Thu Jun 14 01:30:33 [conn4] SocketException: remote: 10.255.119.66:47823 error: 9001 socket exception [0] server [10.255.119.66:47823]
m31201| Thu Jun 14 01:30:33 [conn4] end connection 10.255.119.66:47823 (4 connections now open)
m31200| Thu Jun 14 01:30:33 [conn5] runQuery called admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:33 [conn5] run command admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:33 [conn5] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:302 0ms
m31200| Thu Jun 14 01:30:33 [conn6] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] run command admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] command: { replSetGetStatus: 1 }
Thu Jun 14 01:30:33 Primary for replica set test-rs1 changed to domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:30:33 [conn6] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
m31200| Thu Jun 14 01:30:33 [conn5] runQuery called admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:33 [conn5] run command admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:33 [conn5] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:302 0ms
m31200| Thu Jun 14 01:30:33 [conn6] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] run command admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] command: { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:33 [conn6] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
Thu Jun 14 01:30:33 replica set monitor for replica set test-rs1 started, address is test-rs1/domU-12-31-39-01-70-B4:31200
Thu Jun 14 01:30:33 [ReplicaSetMonitorWatcher] starting
m31200| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:35439 #7 (6 connections now open)
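The replica set monitor above resolves the seed list to the current primary. A minimal sketch of connecting the same way from the mongo shell, using the set-name/seed-list form printed in the log (hosts are from this run):

    // "setName/host1,host2" asks the shell's replica-set client to track the set
    // and route to whichever member is primary, as the monitor above does.
    var conn = new Mongo("test-rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201");
    var admin = conn.getDB("admin");
    printjson(admin.runCommand({ ismaster: 1 })); // shows primary and hosts as seen by the client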
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:30:33 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0 -vvv
m29000| Thu Jun 14 01:30:33
m29000| Thu Jun 14 01:30:33 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:30:33
m29000| Thu Jun 14 01:30:33 BackgroundJob starting: DataFileSync
m29000| Thu Jun 14 01:30:33 versionCmpTest passed
m29000| Thu Jun 14 01:30:33 versionArrayTest passed
m29000| Thu Jun 14 01:30:33 isInRangeTest passed
m29000| Thu Jun 14 01:30:33 shardKeyTest passed
m29000| Thu Jun 14 01:30:33 shardObjTest passed
m29000| Thu Jun 14 01:30:33 [initandlisten] MongoDB starting : pid=23522 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:30:33 [initandlisten]
m29000| Thu Jun 14 01:30:33 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:30:33 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:30:33 [initandlisten]
m29000| Thu Jun 14 01:30:33 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:30:33 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:30:33 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:30:33 [initandlisten]
m29000| Thu Jun 14 01:30:33 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:30:33 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:30:33 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:30:33 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000, vvv: true }
m29000| Thu Jun 14 01:30:33 [initandlisten] flushing directory /data/db/test-config0
m29000| Thu Jun 14 01:30:33 [initandlisten] opening db: local
m29000| Thu Jun 14 01:30:33 [initandlisten] enter repairDatabases (to check pdfile version #)
m29000| Thu Jun 14 01:30:33 [initandlisten] done repairDatabases
m29000| Thu Jun 14 01:30:33 BackgroundJob starting: snapshot
m29000| Thu Jun 14 01:30:33 [initandlisten] fd limit hard:1024 soft:1024 max conn: 819
m29000| Thu Jun 14 01:30:33 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:30:33 BackgroundJob starting: ClientCursorMonitor
m29000| Thu Jun 14 01:30:33 BackgroundJob starting: PeriodicTask::Runner
m29000| Thu Jun 14 01:30:33 BackgroundJob starting: TTLMonitor
m29000| Thu Jun 14 01:30:33 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m29000| Thu Jun 14 01:30:33 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:30:33 [websvr] ERROR: addr already in use
"domU-12-31-39-01-70-B4:29000"
m29000| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 127.0.0.1:40773 #1 (1 connection now open)
ShardingTest test :
{
"config" : "domU-12-31-39-01-70-B4:29000",
"shards" : [
connection to domU-12-31-39-01-70-B4:30000,
connection to test-rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201
]
}
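The ShardingTest summary above lists one standalone shard and one replica-set shard behind the config server on port 29000. The harness registers these shards itself once a mongos is up; done by hand against the mongos started below on port 30999, the equivalent admin commands would be roughly (hosts/ports from this log):

    var admin = new Mongo("domU-12-31-39-01-70-B4:30999").getDB("admin");
    // Register the standalone shard and the replica-set shard (setName/seed,seed form).
    printjson(admin.runCommand({ addShard: "domU-12-31-39-01-70-B4:30000" }));
    printjson(admin.runCommand({ addShard: "test-rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201" }));
    // List what the config server now knows about.
    printjson(admin.runCommand({ listShards: 1 }));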
Thu Jun 14 01:30:33 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:29000 -v
m29000| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:52417 #2 (2 connections now open)
m29000| Thu Jun 14 01:30:33 [conn2] opening db: config
m29000| Thu Jun 14 01:30:33 [conn2] mmf create /data/db/test-config0/config.ns
m29000| Thu Jun 14 01:30:33 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:30:33 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29000| Thu Jun 14 01:30:33 [FileAllocator] flushing directory /data/db/test-config0
m30999| Thu Jun 14 01:30:33 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:30:33 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23537 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:30:33 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:30:33 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:30:33 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:30:33 [mongosMain] config string : domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:30:33 [mongosMain] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:30:33 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:30:33 [initandlisten] connection accepted from 10.255.119.66:52419 #3 (3 connections now open)
m30999| Thu Jun 14 01:30:33 [mongosMain] connected connection!
m29000| Thu Jun 14 01:30:33 [FileAllocator] flushing directory /data/db/test-config0
m29000| Thu Jun 14 01:30:33 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.25 secs
m29000| Thu Jun 14 01:30:33 [conn2] mmf finishOpening 0xb18bd000 /data/db/test-config0/config.ns len:16777216
m29000| Thu Jun 14 01:30:33 [conn2] mmf create /data/db/test-config0/config.0
m29000| Thu Jun 14 01:30:33 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:30:33 [FileAllocator] flushing directory /data/db/test-config0
m29000| Thu Jun 14 01:30:34 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.298 secs
m29000| Thu Jun 14 01:30:34 [conn2] mmf finishOpening 0xb08bd000 /data/db/test-config0/config.0 len:16777216
m29000| Thu Jun 14 01:30:34 [conn2] mmf close
m29000| Thu Jun 14 01:30:34 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:30:34 [conn2] allocExtent config.settings size 2304 0
m29000| Thu Jun 14 01:30:34 [conn2] adding _id index for collection config.settings
m29000| Thu Jun 14 01:30:34 [conn2] allocExtent config.system.indexes size 3840 0
m29000| Thu Jun 14 01:30:34 [conn2] New namespace: config.system.indexes
m29000| Thu Jun 14 01:30:34 [conn2] allocExtent config.system.namespaces size 2304 0
m29000| Thu Jun 14 01:30:34 [conn2] New namespace: config.system.namespaces
m29000| Thu Jun 14 01:30:34 [conn2] build index config.settings { _id: 1 }
m29000| mem info: before index start vsize: 143 resident: 31 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn2] external sort root: /data/db/test-config0/_tmp/esort.1339651834.0/
m29000| mem info: before final sort vsize: 143 resident: 31 mapped: 32
m29000| mem info: after final sort vsize: 143 resident: 31 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn2] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn2] allocExtent config.settings.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn2] New namespace: config.settings.$_id_
m29000| Thu Jun 14 01:30:34 [conn2] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn2] New namespace: config.settings
m29000| Thu Jun 14 01:30:34 [conn2] insert config.settings keyUpdates:0 locks(micros) w:574173 574ms
m29000| Thu Jun 14 01:30:34 [conn3] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) w:163 0ms
m29000| Thu Jun 14 01:30:34 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn3] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:163 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn3] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn3] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn3] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:822 w:163 reslen:203 0ms
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:30:34 [CheckConfigServers] creating new connection to:domU-12-31-39-01-70-B4:29000
m29000| Thu Jun 14 01:30:34 [conn3] runQuery called config.version {}
m29000| Thu Jun 14 01:30:34 [conn3] query config.version ntoreturn:0 keyUpdates:0 locks(micros) r:932 w:163 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn3] runQuery called config.$cmd { count: "shards", query: {} }
m29000| Thu Jun 14 01:30:34 [conn3] run command config.$cmd { count: "shards", query: {} }
m29000| Thu Jun 14 01:30:34 [conn3] command config.$cmd command: { count: "shards", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1002 w:163 reslen:58 0ms
m29000| Thu Jun 14 01:30:34 [conn3] runQuery called config.$cmd { count: "databases", query: {} }
m29000| Thu Jun 14 01:30:34 [conn3] run command config.$cmd { count: "databases", query: {} }
m29000| Thu Jun 14 01:30:34 [conn3] command config.$cmd command: { count: "databases", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1062 w:163 reslen:58 0ms
m29000| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:52422 #4 (4 connections now open)
m29000| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:52423 #5 (5 connections now open)
m29000| Thu Jun 14 01:30:34 [conn3] runQuery called admin.$cmd { ismaster: 1 }
m29000| Thu Jun 14 01:30:34 [conn3] run command admin.$cmd { ismaster: 1 }
m29000| Thu Jun 14 01:30:34 [conn3] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1062 w:163 reslen:90 0ms
m29000| Thu Jun 14 01:30:34 [conn5] allocExtent config.version size 1536 0
m29000| Thu Jun 14 01:30:34 [conn5] adding _id index for collection config.version
m29000| Thu Jun 14 01:30:34 [conn5] build index config.version { _id: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn5] external sort root: /data/db/test-config0/_tmp/esort.1339651834.1/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn5] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn5] allocExtent config.version.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn5] New namespace: config.version.$_id_
m29000| Thu Jun 14 01:30:34 [conn5] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn5] New namespace: config.version
m29000| Thu Jun 14 01:30:34 [conn5] insert config.version keyUpdates:0 locks(micros) w:1004 1ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.version {}
m29000| Thu Jun 14 01:30:34 [conn5] query config.version ntoreturn:0 keyUpdates:0 locks(micros) r:58 w:1004 nreturned:1 reslen:47 0ms
m29000| Thu Jun 14 01:30:34 [conn3] runQuery called config.settings {}
m29000| Thu Jun 14 01:30:34 [conn3] query config.settings ntoreturn:0 keyUpdates:0 locks(micros) r:1109 w:163 nreturned:1 reslen:59 0ms
m29000| Thu Jun 14 01:30:34 [conn4] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) w:104 0ms
m29000| Thu Jun 14 01:30:34 [conn4] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:104 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn4] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:771 w:104 reslen:249 0ms
m29000| Thu Jun 14 01:30:34 [conn3] create collection config.chunks {}
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.chunks size 8192 0
m29000| Thu Jun 14 01:30:34 [conn3] adding _id index for collection config.chunks
m29000| Thu Jun 14 01:30:34 [conn3] build index config.chunks { _id: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651834.2/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort used : 0 files in 0 secs
m30999| Thu Jun 14 01:30:34 [mongosMain] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [mongosMain] connected connection!
m30999| Thu Jun 14 01:30:34 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:30:34 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:30:34 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:34 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:30:34 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:34 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:30:34 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: PeriodicTask::Runner
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.chunks.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.chunks.$_id_
m29000| Thu Jun 14 01:30:34 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.chunks
m29000| Thu Jun 14 01:30:34 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:30:34 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651834.3/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.chunks.$ns_1_min_1 size 36864 0
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.chunks.$ns_1_min_1
m29000| Thu Jun 14 01:30:34 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1109 w:1832 1ms
m29000| Thu Jun 14 01:30:34 [conn5] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:140 w:1004 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651834.4/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.chunks.$ns_1_shard_1_min_1 size 36864 0
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.chunks.$ns_1_shard_1_min_1
m29000| Thu Jun 14 01:30:34 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1109 w:2510 0ms
m29000| Thu Jun 14 01:30:34 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651834.5/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.chunks.$ns_1_lastmod_1 size 36864 0
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.chunks.$ns_1_lastmod_1
m29000| Thu Jun 14 01:30:34 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1109 w:3297 0ms
m29000| Thu Jun 14 01:30:34 [conn3] create collection config.shards {}
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.shards size 8192 0
m29000| Thu Jun 14 01:30:34 [conn3] adding _id index for collection config.shards
m29000| Thu Jun 14 01:30:34 [conn3] build index config.shards { _id: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:52424 #6 (6 connections now open)
m29000| Thu Jun 14 01:30:34 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651834.6/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.shards.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.shards.$_id_
m29000| Thu Jun 14 01:30:34 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.shards
m29000| Thu Jun 14 01:30:34 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:30:34 [conn3] build index config.shards { host: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651834.7/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] allocExtent config.shards.$host_1 size 36864 0
m29000| Thu Jun 14 01:30:34 [conn3] New namespace: config.shards.$host_1
m29000| Thu Jun 14 01:30:34 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1109 w:4972 1ms
m29000| Thu Jun 14 01:30:34 [conn5] allocExtent config.mongos size 4608 0
m29000| Thu Jun 14 01:30:34 [conn5] adding _id index for collection config.mongos
m29000| Thu Jun 14 01:30:34 [conn5] build index config.mongos { _id: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn5] external sort root: /data/db/test-config0/_tmp/esort.1339651834.8/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn5] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn5] allocExtent config.mongos.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn5] New namespace: config.mongos.$_id_
m29000| Thu Jun 14 01:30:34 [conn5] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn5] New namespace: config.mongos
m29000| Thu Jun 14 01:30:34 [conn5] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30999" } update: { $set: { ping: new Date(1339651834227), up: 0, waiting: false } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:140 w:2114 1ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:34 [conn6] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:30 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called config.settings { _id: "chunksize" }
m29000| Thu Jun 14 01:30:34 [conn6] query config.settings query: { _id: "chunksize" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:55 reslen:59 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called config.settings { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn6] query config.settings query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:68 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:84 reslen:1625 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:99 reslen:1713 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:113 reslen:1713 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn6] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:171 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn4] allocExtent config.lockpings size 4864 0
m29000| Thu Jun 14 01:30:34 [conn4] adding _id index for collection config.lockpings
m29000| Thu Jun 14 01:30:34 [conn4] build index config.lockpings { _id: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn4] external sort root: /data/db/test-config0/_tmp/esort.1339651834.9/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn4] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn4] allocExtent config.lockpings.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn4] New namespace: config.lockpings.$_id_
m29000| Thu Jun 14 01:30:34 [conn4] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn4] New namespace: config.lockpings
m29000| Thu Jun 14 01:30:34 [conn4] update config.lockpings query: { _id: "domU-12-31-39-01-70-B4:30999:1339651834:1804289383" } update: { $set: { ping: new Date(1339651834232) } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:771 w:1077 0ms
m29000| Thu Jun 14 01:30:34 [conn4] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:771 w:1077 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn6] allocExtent config.locks size 2816 0
m29000| Thu Jun 14 01:30:34 [conn6] adding _id index for collection config.locks
m29000| Thu Jun 14 01:30:34 [conn6] build index config.locks { _id: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn6] external sort root: /data/db/test-config0/_tmp/esort.1339651834.10/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn6] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn6] allocExtent config.locks.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn6] New namespace: config.locks.$_id_
m29000| Thu Jun 14 01:30:34 [conn6] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn6] New namespace: config.locks
m29000| Thu Jun 14 01:30:34 [conn6] insert config.locks keyUpdates:0 locks(micros) r:171 w:760 0ms
m29000| Thu Jun 14 01:30:34 [conn6] running multiple plans
m29000| Thu Jun 14 01:30:34 [conn6] update config.locks query: { _id: "balancer", state: 0 } update: { $set: { state: 1, who: "domU-12-31-39-01-70-B4:30999:1339651834:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30999:1339651834:1804289383", when: new Date(1339651834233), why: "doing balance round", ts: ObjectId('4fd976fa845fce918c6f665b') } } nscanned:1 nmoved:1 nupdated:1 keyUpdates:0 locks(micros) r:171 w:1097 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:171 w:1097 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn4] runQuery called config.locks {}
m29000| Thu Jun 14 01:30:34 [conn4] query config.locks ntoreturn:0 keyUpdates:0 locks(micros) r:804 w:1077 nreturned:1 reslen:256 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn6] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:185 w:1097 reslen:256 0ms
m29000| Thu Jun 14 01:30:34 [conn4] remove config.lockpings query: { _id: { $nin: [ "domU-12-31-39-01-70-B4:30999:1339651834:1804289383" ] }, ping: { $lt: new Date(1339306234232) } } keyUpdates:0 locks(micros) r:804 w:1175 0ms
m29000| Thu Jun 14 01:30:34 [conn4] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:804 w:1175 reslen:67 0ms
m29000| Thu Jun 14 01:30:34 [conn6] update config.locks query: { _id: "balancer" } update: { $set: { state: 2, who: "domU-12-31-39-01-70-B4:30999:1339651834:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30999:1339651834:1804289383", when: new Date(1339651834233), why: "doing balance round", ts: ObjectId('4fd976fa845fce918c6f665b') } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:185 w:1146 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:185 w:1146 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn4] build index config.lockpings { ping: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn4] external sort root: /data/db/test-config0/_tmp/esort.1339651834.11/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn4] not using file. size:31 _compares:0
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn4] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn4] allocExtent config.lockpings.$ping_1 size 36864 0
m29000| Thu Jun 14 01:30:34 [conn4] New namespace: config.lockpings.$ping_1
m29000| Thu Jun 14 01:30:34 [conn4] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn4] build index done. scanned 1 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn4] insert config.system.indexes keyUpdates:0 locks(micros) r:804 w:1887 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn6] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:199 w:1146 reslen:256 0ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.collections {}
m29000| Thu Jun 14 01:30:34 [conn5] query config.collections ntoreturn:0 keyUpdates:0 locks(micros) r:232 w:2114 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn6] running multiple plans
m29000| Thu Jun 14 01:30:34 [conn6] update config.locks query: { _id: "balancer", ts: ObjectId('4fd976fa845fce918c6f665b') } update: { $set: { state: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) r:199 w:1277 0ms
m29000| Thu Jun 14 01:30:34 [conn6] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn6] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:199 w:1277 reslen:85 0ms
m30999| Thu Jun 14 01:30:34 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:30:34 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:30:34
m30999| Thu Jun 14 01:30:34 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:30:34 [Balancer] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [Balancer] connected connection!
m30999| Thu Jun 14 01:30:34 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:30:34 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30999:1339651834:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:30:34 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:30:34 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651834:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651834:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651834:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:30:34 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd976fa845fce918c6f665b" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:30:34 [LockPinger] cluster domU-12-31-39-01-70-B4:29000 pinged successfully at Thu Jun 14 01:30:34 2012 by distributed lock pinger 'domU-12-31-39-01-70-B4:29000/domU-12-31-39-01-70-B4:30999:1339651834:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:30:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651834:1804289383' acquired, ts : 4fd976fa845fce918c6f665b
m30999| Thu Jun 14 01:30:34 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:30:34 [Balancer] no collections to balance
m30999| Thu Jun 14 01:30:34 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:30:34 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:30:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651834:1804289383' unlocked.
m29000| Thu Jun 14 01:30:34 [conn5] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30999" } update: { $set: { ping: new Date(1339651834238), up: 0, waiting: true } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:232 w:2180 0ms
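The balancer round above acquires the distributed lock on the config server (state 0 -> 1 -> 2), finds no collections to balance, and releases it (state back to 0). A small sketch for inspecting that lock machinery directly on the config server, using only the collections this log already shows (config.locks, config.lockpings, config.settings):

    var cfg = new Mongo("domU-12-31-39-01-70-B4:29000").getDB("config");
    // state 0 = unlocked, 1 = being acquired, 2 = held; ts ties the holder to a round.
    printjson(cfg.locks.findOne({ _id: "balancer" }));
    // Each mongos keeps a heartbeat document here (see the LockPinger lines above).
    printjson(cfg.lockpings.find().toArray());
    // Balancer on/off and chunk size settings live in config.settings.
    printjson(cfg.settings.findOne({ _id: "balancer" }));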
Thu Jun 14 01:30:34 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb domU-12-31-39-01-70-B4:29000 -vv
m30998| Thu Jun 14 01:30:34 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:30:34 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23557 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:30:34 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:30:34 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:30:34 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", port: 30998, vv: true }
m30998| Thu Jun 14 01:30:34 [mongosMain] config string : domU-12-31-39-01-70-B4:29000
m30998| Thu Jun 14 01:30:34 [mongosMain] creating new connection to:domU-12-31-39-01-70-B4:29000
m30998| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:30:34 [mongosMain] connected connection!
m30998| Thu Jun 14 01:30:34 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:30:34 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:30:34 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:30:34 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:30:34 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:30:34 [websvr] admin web console waiting for connections on port 31998
m29000| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:52427 #7 (7 connections now open)
m29000| Thu Jun 14 01:30:34 [conn7] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) w:108 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:108 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:799 w:108 reslen:476 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called config.version {}
m29000| Thu Jun 14 01:30:34 [conn7] query config.version ntoreturn:0 keyUpdates:0 locks(micros) r:850 w:108 nreturned:1 reslen:47 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called config.settings {}
m29000| Thu Jun 14 01:30:34 [conn7] query config.settings ntoreturn:0 keyUpdates:0 locks(micros) r:869 w:108 nreturned:1 reslen:59 0ms
m29000| Thu Jun 14 01:30:34 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:869 w:129 0ms
m29000| Thu Jun 14 01:30:34 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:869 w:142 0ms
m29000| Thu Jun 14 01:30:34 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:869 w:156 0ms
m29000| Thu Jun 14 01:30:34 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:869 w:168 0ms
m29000| Thu Jun 14 01:30:34 [conn7] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) r:869 w:244 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:869 w:244 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1435 w:244 reslen:476 0ms
m30998| Thu Jun 14 01:30:34 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:30:34 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:30:34 [Balancer] creating new connection to:domU-12-31-39-01-70-B4:29000
m30998| Thu Jun 14 01:30:34 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:30:34 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:52428 #8 (8 connections now open)
m30998| Thu Jun 14 01:30:34 [Balancer] connected connection!
m30998| Thu Jun 14 01:30:34 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:30:34 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:30:34
m30998| Thu Jun 14 01:30:34 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:30:34 [Balancer] creating new connection to:domU-12-31-39-01-70-B4:29000
m29000| Thu Jun 14 01:30:34 [conn8] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:34 [conn8] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:36 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn8] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30998" } update: { $set: { ping: new Date(1339651834279), up: 0, waiting: false } } nscanned:0 idhack:1 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:36 w:121 0ms
m30998| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:52429 #9 (9 connections now open)
m30998| Thu Jun 14 01:30:34 [Balancer] connected connection!
m30998| Thu Jun 14 01:30:34 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:30:34 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: -1
m30998| Thu Jun 14 01:30:34 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: 0
m30998| Thu Jun 14 01:30:34 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: 0
m30998| Thu Jun 14 01:30:34 [Balancer] total clock skew of 0ms for servers domU-12-31-39-01-70-B4:29000 is in 30000ms bounds.
m30998| Thu Jun 14 01:30:34 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651834:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339651834:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339651834:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:30:34 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd976fa39ddd1b502b5535f" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd976fa845fce918c6f665b" } }
m30998| Thu Jun 14 01:30:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651834:1804289383' acquired, ts : 4fd976fa39ddd1b502b5535f
m30998| Thu Jun 14 01:30:34 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:30:34 [Balancer] no collections to balance
m30998| Thu Jun 14 01:30:34 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:30:34 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:30:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651834:1804289383' unlocked.
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:34 [conn9] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:27 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called config.settings { _id: "chunksize" }
m29000| Thu Jun 14 01:30:34 [conn9] query config.settings query: { _id: "chunksize" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:54 reslen:59 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called config.settings { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn9] query config.settings query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:66 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:82 reslen:1713 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:97 reslen:1713 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:112 reslen:1713 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn9] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:127 reslen:256 0ms
m29000| Thu Jun 14 01:30:34 [conn9] running multiple plans
m29000| Thu Jun 14 01:30:34 [conn9] update config.locks query: { _id: "balancer", state: 0, ts: ObjectId('4fd976fa845fce918c6f665b') } update: { $set: { state: 1, who: "domU-12-31-39-01-70-B4:30998:1339651834:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30998:1339651834:1804289383", when: new Date(1339651834281), why: "doing balance round", ts: ObjectId('4fd976fa39ddd1b502b5535f') } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) r:127 w:240 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:127 w:240 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn9] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:140 w:240 reslen:256 0ms
m29000| Thu Jun 14 01:30:34 [conn9] update config.locks query: { _id: "balancer" } update: { $set: { state: 2, who: "domU-12-31-39-01-70-B4:30998:1339651834:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30998:1339651834:1804289383", when: new Date(1339651834281), why: "doing balance round", ts: ObjectId('4fd976fa39ddd1b502b5535f') } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:140 w:284 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:140 w:284 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:34 [conn9] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:153 w:284 reslen:256 0ms
m29000| Thu Jun 14 01:30:34 [conn8] runQuery called config.collections {}
m29000| Thu Jun 14 01:30:34 [conn8] query config.collections ntoreturn:0 keyUpdates:0 locks(micros) r:105 w:121 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn9] running multiple plans
m29000| Thu Jun 14 01:30:34 [conn9] update config.locks query: { _id: "balancer", ts: ObjectId('4fd976fa39ddd1b502b5535f') } update: { $set: { state: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) r:153 w:400 0ms
m29000| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:153 w:400 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn8] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30998" } update: { $set: { ping: new Date(1339651834283), up: 0, waiting: true } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:105 w:151 0ms
m30998| Thu Jun 14 01:30:34 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30998:1339651834:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:30:34 [conn7] update config.lockpings query: { _id: "domU-12-31-39-01-70-B4:30998:1339651834:1804289383" } update: { $set: { ping: new Date(1339651834284) } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:1435 w:338 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1435 w:338 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called config.locks {}
m29000| Thu Jun 14 01:30:34 [conn7] query config.locks ntoreturn:0 keyUpdates:0 locks(micros) r:1472 w:338 nreturned:1 reslen:256 0ms
m29000| Thu Jun 14 01:30:34 [conn7] remove config.lockpings query: { _id: { $nin: [ "domU-12-31-39-01-70-B4:30998:1339651834:1804289383" ] }, ping: { $lt: new Date(1339306234284) } } keyUpdates:0 locks(micros) r:1472 w:445 0ms
m29000| Thu Jun 14 01:30:34 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1472 w:445 reslen:67 0ms
m29000| Thu Jun 14 01:30:34 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:1472 w:460 0ms
m30998| Thu Jun 14 01:30:34 [LockPinger] cluster domU-12-31-39-01-70-B4:29000 pinged successfully at Thu Jun 14 01:30:34 2012 by distributed lock pinger 'domU-12-31-39-01-70-B4:29000/domU-12-31-39-01-70-B4:30998:1339651834:1804289383', sleeping for 30000ms
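The conn9 traffic above is the lock handshake itself: a guarded update on config.locks that only applies while the lock is still in state 0 with the previously read ts, a getlasterror check, a promotion to state 2, and finally a guarded reset to state 0 when the round ends. Re-played by hand against the config server it would look roughly like this (values copied from the log; an illustration, not the server's code path):

    var locks = db.getSiblingDB("config").locks;
    // 1. Take the lock only if it is still free and unchanged since we read it.
    locks.update({ _id: "balancer", state: 0, ts: ObjectId("4fd976fa845fce918c6f665b") },
                 { $set: { state: 1,
                           who: "domU-12-31-39-01-70-B4:30998:1339651834:1804289383:Balancer:846930886",
                           process: "domU-12-31-39-01-70-B4:30998:1339651834:1804289383",
                           why: "doing balance round",
                           ts: ObjectId("4fd976fa39ddd1b502b5535f") } });
    db.getSiblingDB("config").runCommand({ getlasterror: 1 });   // did the guarded update apply?
    // 2. Confirm ownership by promoting the entry to state 2
    //    (the real update re-sends who/process/when/why as well, as the log shows).
    locks.update({ _id: "balancer" }, { $set: { state: 2 } });
    // 3. Release: reset to state 0, guarded by our own ts.
    locks.update({ _id: "balancer", ts: ObjectId("4fd976fa39ddd1b502b5535f") }, { $set: { state: 0 } });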
m30999| Thu Jun 14 01:30:34 [mongosMain] connection accepted from 127.0.0.1:51444 #1 (1 connection now open)
m29000| Thu Jun 14 01:30:34 [FileAllocator] flushing directory /data/db/test-config0
ShardingTest undefined going to add shard : domU-12-31-39-01-70-B4:30000
m30998| Thu Jun 14 01:30:34 [mongosMain] connection accepted from 127.0.0.1:35781 #1 (1 connection now open)
m29000| Thu Jun 14 01:30:34 [conn4] runQuery called config.databases { _id: "admin" }
m29000| Thu Jun 14 01:30:34 [conn4] query config.databases query: { _id: "admin" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:879 w:1887 reslen:20 0ms
m30999| Thu Jun 14 01:30:34 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.databases { _id: /^admin$/i }
m29000| Thu Jun 14 01:30:34 [conn5] query config.databases query: { _id: /^admin$/i } ntoreturn:1 keyUpdates:0 locks(micros) r:476 w:2180 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn4] allocExtent config.databases size 3328 0
m29000| Thu Jun 14 01:30:34 [conn4] adding _id index for collection config.databases
m29000| Thu Jun 14 01:30:34 [conn4] build index config.databases { _id: 1 }
m29000| mem info: before index start vsize: 149 resident: 33 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn4] external sort root: /data/db/test-config0/_tmp/esort.1339651834.12/
m29000| mem info: before final sort vsize: 149 resident: 33 mapped: 32
m29000| mem info: after final sort vsize: 149 resident: 33 mapped: 32
m29000| Thu Jun 14 01:30:34 [conn4] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:34 [conn4] allocExtent config.databases.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:34 [conn4] New namespace: config.databases.$_id_
m29000| Thu Jun 14 01:30:34 [conn4] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:34 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:34 [conn4] New namespace: config.databases
m29000| Thu Jun 14 01:30:34 [conn4] update config.databases query: { _id: "admin" } update: { _id: "admin", partitioned: false, primary: "config" } nscanned:0 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:879 w:3069 1ms
m29000| Thu Jun 14 01:30:34 [conn4] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn4] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:879 w:3069 reslen:85 0ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.shards { query: { _id: /^shard/ }, orderby: { _id: -1 } }
m30000| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:53490 #2 (2 connections now open)
m30000| Thu Jun 14 01:30:34 [conn2] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:67 0ms
m30000| Thu Jun 14 01:30:34 [conn2] runQuery called admin.$cmd { isdbgrid: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] run command admin.$cmd { isdbgrid: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] command admin.$cmd command: { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:99 0ms
m29000| Thu Jun 14 01:30:34 [conn5] query config.shards query: { query: { _id: /^shard/ }, orderby: { _id: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:665 w:2180 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.shards { host: "domU-12-31-39-01-70-B4:30000" }
m30000| Thu Jun 14 01:30:34 [conn2] runQuery called admin.$cmd { isMaster: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] run command admin.$cmd { isMaster: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] command admin.$cmd command: { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:90 0ms
m30000| Thu Jun 14 01:30:34 [conn2] runQuery called admin.$cmd { listDatabases: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] run command admin.$cmd { listDatabases: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] command: { listDatabases: 1 }
m30000| Thu Jun 14 01:30:34 [conn2] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:18 r:14 reslen:124 0ms
m29000| Thu Jun 14 01:30:34 [conn5] query config.shards query: { host: "domU-12-31-39-01-70-B4:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:741 w:2180 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn5] insert config.shards keyUpdates:0 locks(micros) r:741 w:2261 0ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn5] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:741 w:2261 reslen:67 0ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:34 [conn5] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:765 w:2261 nreturned:1 reslen:83 0ms
{ "shardAdded" : "shard0000", "ok" : 1 }
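That { "shardAdded" : "shard0000" } reply is mongos answering the addShard command the test harness sends for the standalone mongod on port 30000. Issued by hand against the mongos on 30999 it is simply (illustrative; the harness goes through its own helper):

    // From a shell connected to the mongos:
    printjson(db.getSiblingDB("admin").runCommand({ addShard: "domU-12-31-39-01-70-B4:30000" }));
    // -> { "shardAdded" : "shard0000", "ok" : 1 }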
ShardingTest undefined going to add shard : test-rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201
m30999| Thu Jun 14 01:30:34 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:30:34 [conn] creating new connection to:domU-12-31-39-01-70-B4:30000
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [conn] connected connection!
m30999| Thu Jun 14 01:30:34 [conn] going to add shard: { _id: "shard0000", host: "domU-12-31-39-01-70-B4:30000" }
m30999| Thu Jun 14 01:30:34 [conn] starting new replica set monitor for replica set test-rs1 with seed of domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31200 for replica set test-rs1
m30999| Thu Jun 14 01:30:34 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31200 { setName: "test-rs1", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31200" ], arbiters: [ "domU-12-31-39-01-70-B4:31201" ], primary: "domU-12-31-39-01-70-B4:31200", me: "domU-12-31-39-01-70-B4:31200", maxBsonObjectSize: 16777216, localTime: new Date(1339651834483), ok: 1.0 }
m30999| Thu Jun 14 01:30:34 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31200" } from test-rs1/
m30999| Thu Jun 14 01:30:34 [conn] trying to add new host domU-12-31-39-01-70-B4:31200 to replica set test-rs1
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31200 in replica set test-rs1
m30999| Thu Jun 14 01:30:34 [conn] creating new connection to:domU-12-31-39-01-70-B4:31200
m31200| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:35457 #8 (7 connections now open)
m31200| Thu Jun 14 01:30:34 [conn8] runQuery called admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn8] run command admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn8] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:302 0ms
m31200| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:35458 #9 (8 connections now open)
m31200| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:35459 #10 (9 connections now open)
m31200| Thu Jun 14 01:30:34 [conn10] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] run command admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] command: { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
m31200| Thu Jun 14 01:30:34 [conn8] Socket recv() conn closed? 10.255.119.66:35457
m31200| Thu Jun 14 01:30:34 [conn8] SocketException: remote: 10.255.119.66:35457 error: 9001 socket exception [0] server [10.255.119.66:35457]
m31200| Thu Jun 14 01:30:34 [conn8] end connection 10.255.119.66:35457 (8 connections now open)
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [conn] connected connection!
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs _checkStatus couldn't _find(domU-12-31-39-01-70-B4:31201)
m30999| Thu Jun 14 01:30:34 [conn] replicaSetChange: shard not found for set: test-rs1/domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31201 for replica set test-rs1
m30999| Thu Jun 14 01:30:34 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31201 { setName: "test-rs1", ismaster: false, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31200" ], arbiters: [ "domU-12-31-39-01-70-B4:31201" ], primary: "domU-12-31-39-01-70-B4:31200", arbiterOnly: true, me: "domU-12-31-39-01-70-B4:31201", maxBsonObjectSize: 16777216, localTime: new Date(1339651834485), ok: 1.0 }
m30999| Thu Jun 14 01:30:34 [conn] creating new connection to:domU-12-31-39-01-70-B4:31201
m31201| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:47846 #6 (5 connections now open)
m31201| Thu Jun 14 01:30:34 [conn6] runQuery called admin.$cmd { ismaster: 1 }
m31201| Thu Jun 14 01:30:34 [conn6] run command admin.$cmd { ismaster: 1 }
m31201| Thu Jun 14 01:30:34 [conn6] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:316 0ms
m31201| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:47847 #7 (6 connections now open)
m31201| Thu Jun 14 01:30:34 [conn7] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31201| Thu Jun 14 01:30:34 [conn7] run command admin.$cmd { replSetGetStatus: 1 }
m31201| Thu Jun 14 01:30:34 [conn7] command: { replSetGetStatus: 1 }
m31201| Thu Jun 14 01:30:34 [conn7] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
m31201| Thu Jun 14 01:30:34 [conn6] Socket recv() conn closed? 10.255.119.66:47846
m31201| Thu Jun 14 01:30:34 [conn6] SocketException: remote: 10.255.119.66:47846 error: 9001 socket exception [0] server [10.255.119.66:47846]
m31201| Thu Jun 14 01:30:34 [conn6] end connection 10.255.119.66:47846 (5 connections now open)
m31200| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:302 0ms
m31200| Thu Jun 14 01:30:34 [conn10] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] run command admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] command: { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
m31200| Thu Jun 14 01:30:34 [conn9] runQuery called admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn9] run command admin.$cmd { ismaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn9] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:302 0ms
m31200| Thu Jun 14 01:30:34 [conn10] runQuery called admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] run command admin.$cmd { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] command: { replSetGetStatus: 1 }
m31200| Thu Jun 14 01:30:34 [conn10] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:408 0ms
m31200| Thu Jun 14 01:30:34 [initandlisten] connection accepted from 10.255.119.66:35462 #11 (9 connections now open)
m31200| Thu Jun 14 01:30:34 [conn11] runQuery called admin.$cmd { getlasterror: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] run command admin.$cmd { getlasterror: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:83 0ms
m31200| Thu Jun 14 01:30:34 [conn11] runQuery called admin.$cmd { isdbgrid: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] run command admin.$cmd { isdbgrid: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] command admin.$cmd command: { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:99 0ms
m31200| Thu Jun 14 01:30:34 [conn11] runQuery called admin.$cmd { isMaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] run command admin.$cmd { isMaster: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] command admin.$cmd command: { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:302 0ms
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:34 [conn] connected connection!
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs _checkStatus couldn't _find(domU-12-31-39-01-70-B4:31201)
m30999| Thu Jun 14 01:30:34 [conn] _check : test-rs1/domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31200 { setName: "test-rs1", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31200" ], arbiters: [ "domU-12-31-39-01-70-B4:31201" ], primary: "domU-12-31-39-01-70-B4:31200", me: "domU-12-31-39-01-70-B4:31200", maxBsonObjectSize: 16777216, localTime: new Date(1339651834486), ok: 1.0 }
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs _checkStatus couldn't _find(domU-12-31-39-01-70-B4:31201)
m30999| Thu Jun 14 01:30:34 [conn] Primary for replica set test-rs1 changed to domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31200 { setName: "test-rs1", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31200" ], arbiters: [ "domU-12-31-39-01-70-B4:31201" ], primary: "domU-12-31-39-01-70-B4:31200", me: "domU-12-31-39-01-70-B4:31200", maxBsonObjectSize: 16777216, localTime: new Date(1339651834487), ok: 1.0 }
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 [conn] dbclient_rs _checkStatus couldn't _find(domU-12-31-39-01-70-B4:31201)
m30999| Thu Jun 14 01:30:34 [conn] replica set monitor for replica set test-rs1 started, address is test-rs1/domU-12-31-39-01-70-B4:31200
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ReplicaSetMonitorWatcher
m30999| Thu Jun 14 01:30:34 [ReplicaSetMonitorWatcher] starting
m30999| Thu Jun 14 01:30:34 BackgroundJob starting: ConnectBG
m31200| Thu Jun 14 01:30:34 [conn11] runQuery called admin.$cmd { listDatabases: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] run command admin.$cmd { listDatabases: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] command: { listDatabases: 1 }
m31200| Thu Jun 14 01:30:34 [conn11] checking size file /data/db/test-rs1-0/local.ns
m31200| Thu Jun 14 01:30:34 [conn11] checking size file /data/db/test-rs1-0/admin.ns
m31200| Thu Jun 14 01:30:34 [conn11] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:26 reslen:176 0ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.shards { host: "test-rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201" }
m29000| Thu Jun 14 01:30:34 [conn5] query config.shards query: { host: "test-rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:914 w:2261 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:34 [conn5] insert config.shards keyUpdates:0 locks(micros) r:914 w:2313 0ms
m30999| Thu Jun 14 01:30:34 [conn] going to add shard: { _id: "test-rs1", host: "test-rs1/domU-12-31-39-01-70-B4:31200" }
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn5] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:34 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:914 w:2313 reslen:67 0ms
m29000| Thu Jun 14 01:30:34 [conn5] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:34 [conn5] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:941 w:2313 nreturned:2 reslen:154 0ms
{ "shardAdded" : "test-rs1", "ok" : 1 }
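The second addShard uses the replica-set seed-list form, setName/host,host. mongos starts a ReplicaSetMonitor for test-rs1, sees from the ismaster replies that 31201 is only an arbiter, and records the shard as test-rs1/domU-12-31-39-01-70-B4:31200 (arbiters are dropped from the stored host string, as the "going to add shard" line above shows). A hand-run equivalent, for illustration only:

    printjson(db.getSiblingDB("admin").runCommand(
        { addShard: "test-rs1/domU-12-31-39-01-70-B4:31200,domU-12-31-39-01-70-B4:31201" }));
    // config.shards then holds: { _id: "test-rs1", host: "test-rs1/domU-12-31-39-01-70-B4:31200" }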
m29000| Thu Jun 14 01:30:34 [conn5] Socket recv() conn closed? 10.255.119.66:52423
m29000| Thu Jun 14 01:30:34 [conn3] Socket recv() conn closed? 10.255.119.66:52419
m29000| Thu Jun 14 01:30:34 [conn4] Socket recv() conn closed? 10.255.119.66:52422
m29000| Thu Jun 14 01:30:34 [conn5] SocketException: remote: 10.255.119.66:52423 error: 9001 socket exception [0] server [10.255.119.66:52423]
m29000| Thu Jun 14 01:30:34 [conn5] end connection 10.255.119.66:52423 (8 connections now open)
m29000| Thu Jun 14 01:30:34 [conn4] SocketException: remote: 10.255.119.66:52422 error: 9001 socket exception [0] server [10.255.119.66:52422]
m29000| Thu Jun 14 01:30:34 [conn4] end connection 10.255.119.66:52422 (8 connections now open)
m29000| Thu Jun 14 01:30:34 [conn3] SocketException: remote: 10.255.119.66:52419 error: 9001 socket exception [0] server [10.255.119.66:52419]
m29000| Thu Jun 14 01:30:34 [conn3] end connection 10.255.119.66:52419 (7 connections now open)
m29000| Thu Jun 14 01:30:34 [conn6] Socket recv() conn closed? 10.255.119.66:52424
m29000| Thu Jun 14 01:30:34 [conn6] SocketException: remote: 10.255.119.66:52424 error: 9001 socket exception [0] server [10.255.119.66:52424]
m29000| Thu Jun 14 01:30:34 [conn6] end connection 10.255.119.66:52424 (5 connections now open)
m30000| Thu Jun 14 01:30:34 [conn2] Socket recv() conn closed? 10.255.119.66:53490
m30000| Thu Jun 14 01:30:34 [conn2] SocketException: remote: 10.255.119.66:53490 error: 9001 socket exception [0] server [10.255.119.66:53490]
m30000| Thu Jun 14 01:30:34 [conn2] end connection 10.255.119.66:53490 (1 connection now open)
m31201| Thu Jun 14 01:30:34 [conn7] Socket recv() conn closed? 10.255.119.66:47847
m31201| Thu Jun 14 01:30:34 [conn7] SocketException: remote: 10.255.119.66:47847 error: 9001 socket exception [0] server [10.255.119.66:47847]
m31201| Thu Jun 14 01:30:34 [conn7] end connection 10.255.119.66:47847 (4 connections now open)
m30999| Thu Jun 14 01:30:34 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31200| Thu Jun 14 01:30:34 [conn11] Socket recv() conn closed? 10.255.119.66:35462
m31200| Thu Jun 14 01:30:34 [conn9] Socket recv() conn closed? 10.255.119.66:35458
m31200| Thu Jun 14 01:30:34 [conn11] SocketException: remote: 10.255.119.66:35462 error: 9001 socket exception [0] server [10.255.119.66:35462]
m31200| Thu Jun 14 01:30:34 [conn11] end connection 10.255.119.66:35462 (8 connections now open)
m31200| Thu Jun 14 01:30:34 [conn9] SocketException: remote: 10.255.119.66:35458 error: 9001 socket exception [0] server [10.255.119.66:35458]
m31200| Thu Jun 14 01:30:34 [conn9] end connection 10.255.119.66:35458 (8 connections now open)
m31200| Thu Jun 14 01:30:34 [conn10] Socket recv() conn closed? 10.255.119.66:35459
m31200| Thu Jun 14 01:30:34 [conn10] SocketException: remote: 10.255.119.66:35459 error: 9001 socket exception [0] server [10.255.119.66:35459]
m31200| Thu Jun 14 01:30:34 [conn10] end connection 10.255.119.66:35459 (6 connections now open)
m29000| Thu Jun 14 01:30:34 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.712 secs
m31201| Thu Jun 14 01:30:34 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:34 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:34 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:34 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31200| Thu Jun 14 01:30:35 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:35 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:35 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:35 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:124 0ms
Thu Jun 14 01:30:35 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:30:35 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:30:35 [conn7] Socket recv() conn closed? 10.255.119.66:52427
m29000| Thu Jun 14 01:30:35 [conn7] SocketException: remote: 10.255.119.66:52427 error: 9001 socket exception [0] server [10.255.119.66:52427]
m29000| Thu Jun 14 01:30:35 [conn7] end connection 10.255.119.66:52427 (4 connections now open)
m29000| Thu Jun 14 01:30:35 [conn8] Socket recv() conn closed? 10.255.119.66:52428
m29000| Thu Jun 14 01:30:35 [conn8] SocketException: remote: 10.255.119.66:52428 error: 9001 socket exception [0] server [10.255.119.66:52428]
m29000| Thu Jun 14 01:30:35 [conn8] end connection 10.255.119.66:52428 (3 connections now open)
m29000| Thu Jun 14 01:30:35 [conn9] Socket recv() conn closed? 10.255.119.66:52429
m29000| Thu Jun 14 01:30:35 [conn9] SocketException: remote: 10.255.119.66:52429 error: 9001 socket exception [0] server [10.255.119.66:52429]
m29000| Thu Jun 14 01:30:35 [conn9] end connection 10.255.119.66:52429 (2 connections now open)
Thu Jun 14 01:30:36 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:30:36 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:30:36 [interruptThread] now exiting
m30000| Thu Jun 14 01:30:36 dbexit:
m30000| Thu Jun 14 01:30:36 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:30:36 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:30:36 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:30:36 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:30:36 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:30:36 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:30:36 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:30:36 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:30:36 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:30:36 [interruptThread] mmf close
m30000| Thu Jun 14 01:30:36 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:30:36 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:30:36 [interruptThread] shutdown: groupCommitMutex
m30000| Thu Jun 14 01:30:36 dbexit: really exiting now
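Teardown stops each process with SIGTERM (signal 15), and the [interruptThread] lines show the ordered shutdown: close listening sockets, remove the unix socket file, flush, close data files, drop the fs lock. Roughly the same clean exit can be requested from a shell instead of a signal (illustration; the harness itself uses signals):

    // Connected to the node being stopped:
    db.getSiblingDB("admin").shutdownServer();   // asks the server to exit cleanly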
m31201| Thu Jun 14 01:30:36 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:36 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:36 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" }
m31201| Thu Jun 14 01:30:36 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31200" } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31200| Thu Jun 14 01:30:37 [conn3] runQuery called admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:37 [conn3] run command admin.$cmd { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:37 [conn3] command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" }
m31200| Thu Jun 14 01:30:37 [conn3] command admin.$cmd command: { replSetHeartbeat: "test-rs1", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31201" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:124 0ms
Thu Jun 14 01:30:37 shell: stopped mongo program on port 30000
Thu Jun 14 01:30:37 No db started on port: 30001
Thu Jun 14 01:30:37 shell: stopped mongo program on port 30001
ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
m31200| Thu Jun 14 01:30:37 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Thu Jun 14 01:30:37 [interruptThread] now exiting
m31200| Thu Jun 14 01:30:37 dbexit:
m31200| Thu Jun 14 01:30:37 [interruptThread] shutdown: going to close listening sockets...
m31200| Thu Jun 14 01:30:37 [interruptThread] closing listening socket: 14
m31200| Thu Jun 14 01:30:37 [interruptThread] closing listening socket: 16
m31200| Thu Jun 14 01:30:37 [interruptThread] closing listening socket: 18
m31200| Thu Jun 14 01:30:37 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Thu Jun 14 01:30:37 [interruptThread] shutdown: going to flush diaglog...
m31200| Thu Jun 14 01:30:37 [interruptThread] shutdown: going to close sockets...
m31200| Thu Jun 14 01:30:37 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Thu Jun 14 01:30:37 [interruptThread] shutdown: closing all files...
m31200| Thu Jun 14 01:30:37 [interruptThread] mmf close /data/db/test-rs1-0/local.ns
m31200| Thu Jun 14 01:30:37 [interruptThread] mmf close /data/db/test-rs1-0/local.0
m31200| Thu Jun 14 01:30:37 [interruptThread] mmf close /data/db/test-rs1-0/admin.ns
m31200| Thu Jun 14 01:30:37 [interruptThread] mmf close /data/db/test-rs1-0/admin.0
m31200| Thu Jun 14 01:30:37 [interruptThread] closeAllFiles() finished
m31200| Thu Jun 14 01:30:37 [interruptThread] shutdown: removing fs lock...
m31200| Thu Jun 14 01:30:37 [interruptThread] shutdown: groupCommitMutex
m31200| Thu Jun 14 01:30:37 dbexit: really exiting now
m31201| Thu Jun 14 01:30:37 [conn3] Socket recv() conn closed? 10.255.119.66:47818
m31201| Thu Jun 14 01:30:37 [conn3] SocketException: remote: 10.255.119.66:47818 error: 9001 socket exception [0] server [10.255.119.66:47818]
m31201| Thu Jun 14 01:30:37 [conn3] end connection 10.255.119.66:47818 (3 connections now open)
Thu Jun 14 01:30:38 shell: stopped mongo program on port 31200
ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number
ReplSetTest stop *** Shutting down mongod in port 31201 ***
m31201| Thu Jun 14 01:30:38 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Thu Jun 14 01:30:38 [interruptThread] now exiting
m31201| Thu Jun 14 01:30:38 dbexit:
m31201| Thu Jun 14 01:30:38 [interruptThread] shutdown: going to close listening sockets...
m31201| Thu Jun 14 01:30:38 [interruptThread] closing listening socket: 19
m31201| Thu Jun 14 01:30:38 [interruptThread] closing listening socket: 20
m31201| Thu Jun 14 01:30:38 [interruptThread] closing listening socket: 21
m31201| Thu Jun 14 01:30:38 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Thu Jun 14 01:30:38 [interruptThread] shutdown: going to flush diaglog...
m31201| Thu Jun 14 01:30:38 [interruptThread] shutdown: going to close sockets...
m31201| Thu Jun 14 01:30:38 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Thu Jun 14 01:30:38 [interruptThread] shutdown: closing all files...
m31201| Thu Jun 14 01:30:38 [interruptThread] mmf close /data/db/test-rs1-1/local.ns
m31201| Thu Jun 14 01:30:38 [interruptThread] mmf close /data/db/test-rs1-1/local.0
m31201| Thu Jun 14 01:30:38 [interruptThread] closeAllFiles() finished
m31201| Thu Jun 14 01:30:38 [interruptThread] shutdown: removing fs lock...
m31201| Thu Jun 14 01:30:38 [interruptThread] shutdown: groupCommitMutex
m31201| Thu Jun 14 01:30:38 dbexit: really exiting now
Thu Jun 14 01:30:39 shell: stopped mongo program on port 31201
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
m29000| Thu Jun 14 01:30:39 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:30:39 [interruptThread] now exiting
m29000| Thu Jun 14 01:30:39 dbexit:
m29000| Thu Jun 14 01:30:39 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:30:39 [interruptThread] closing listening socket: 25
m29000| Thu Jun 14 01:30:39 [interruptThread] closing listening socket: 26
m29000| Thu Jun 14 01:30:39 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:30:39 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:30:39 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:30:39 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:30:39 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:30:39 [interruptThread] mmf close
m29000| Thu Jun 14 01:30:39 [interruptThread] mmf close /data/db/test-config0/config.ns
m29000| Thu Jun 14 01:30:39 [interruptThread] mmf close /data/db/test-config0/config.0
m29000| Thu Jun 14 01:30:39 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:30:39 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:30:39 [interruptThread] shutdown: groupCommitMutex
m29000| Thu Jun 14 01:30:39 dbexit: really exiting now
Thu Jun 14 01:30:40 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 27.937 seconds ***
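The run that just finished has the usual shape of a sharding jstest built on the shell helpers ReplSetTest and ShardingTest: start a replica set, start a small sharded cluster, add the set as a shard, tear everything down. A minimal sketch of such a test; the option names, the arbiter setup, and the assertion are assumptions, not the original test source:

    // Sketch only: one replica-set shard (data node + arbiter) added to a small sharded cluster.
    var rst = new ReplSetTest({ name: "test-rs1", nodes: 2 });
    rst.startSet();
    // ... initiate with node 1 marked arbiterOnly, as the ismaster output above indicates ...
    var st = new ShardingTest({ name: "test", shards: 1, mongos: 2 });
    var res = st.s.getDB("admin").runCommand({ addShard: rst.getURL() });
    assert.eq(1, res.ok);
    st.stop();
    rst.stopSet();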
Resetting db path '/data/db/test0'
Thu Jun 14 01:30:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0 -vvvv
m30000| Thu Jun 14 01:30:40
m30000| Thu Jun 14 01:30:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:30:40
m30000| Thu Jun 14 01:30:40 BackgroundJob starting: DataFileSync
m30000| Thu Jun 14 01:30:40 versionCmpTest passed
m30000| Thu Jun 14 01:30:40 versionArrayTest passed
m30000| Thu Jun 14 01:30:40 isInRangeTest passed
m30000| Thu Jun 14 01:30:40 shardKeyTest passed
m30000| Thu Jun 14 01:30:40 shardObjTest passed
m30000| Thu Jun 14 01:30:40 [initandlisten] MongoDB starting : pid=23594 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:30:40 [initandlisten]
m30000| Thu Jun 14 01:30:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:30:40 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:30:40 [initandlisten]
m30000| Thu Jun 14 01:30:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:30:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:30:40 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:30:40 [initandlisten]
m30000| Thu Jun 14 01:30:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:30:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:30:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:30:40 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000, vvvv: true }
m30000| Thu Jun 14 01:30:40 [initandlisten] flushing directory /data/db/test0
m30000| Thu Jun 14 01:30:40 [initandlisten] opening db: local
m30000| Thu Jun 14 01:30:40 [initandlisten] enter repairDatabases (to check pdfile version #)
m30000| Thu Jun 14 01:30:40 [initandlisten] done repairDatabases
m30000| Thu Jun 14 01:30:40 BackgroundJob starting: snapshot
m30000| Thu Jun 14 01:30:40 BackgroundJob starting: ClientCursorMonitor
m30000| Thu Jun 14 01:30:40 BackgroundJob starting: PeriodicTask::Runner
m30000| Thu Jun 14 01:30:40 BackgroundJob starting: TTLMonitor
m30000| Thu Jun 14 01:30:40 [initandlisten] fd limit hard:1024 soft:1024 max conn: 819
m30000| Thu Jun 14 01:30:40 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:30:40 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30000| Thu Jun 14 01:30:40 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:30:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1 -vvvvv
m30000| Thu Jun 14 01:30:40 [initandlisten] connection accepted from 127.0.0.1:39371 #1 (1 connection now open)
m30001| Thu Jun 14 01:30:40
m30001| Thu Jun 14 01:30:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:30:40
m30001| Thu Jun 14 01:30:40 BackgroundJob starting: DataFileSync
m30001| Thu Jun 14 01:30:40 versionCmpTest passed
m30001| Thu Jun 14 01:30:40 versionArrayTest passed
m30001| Thu Jun 14 01:30:40 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m30001| Thu Jun 14 01:30:40 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m30001| Thu Jun 14 01:30:40 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m30001| Thu Jun 14 01:30:40 Matcher::matches() { abcdef: "z23456789" }
m30001| Thu Jun 14 01:30:40 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
m30001| Thu Jun 14 01:30:40 Matcher::matches() { abcdef: "z23456789" }
m30001| Thu Jun 14 01:30:40 isInRangeTest passed
m30001| Thu Jun 14 01:30:40 shardKeyTest passed
m30001| Thu Jun 14 01:30:40 shardObjTest passed
m30001| Thu Jun 14 01:30:40 [initandlisten] MongoDB starting : pid=23607 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:30:40 [initandlisten]
m30001| Thu Jun 14 01:30:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:30:40 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:30:40 [initandlisten]
m30001| Thu Jun 14 01:30:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:30:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:30:40 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:30:40 [initandlisten]
m30001| Thu Jun 14 01:30:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:30:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:30:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:30:40 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001, vvvvv: true }
m30001| Thu Jun 14 01:30:40 [initandlisten] flushing directory /data/db/test1
m30001| Thu Jun 14 01:30:40 [initandlisten] opening db: local
m30001| Thu Jun 14 01:30:40 [initandlisten] enter repairDatabases (to check pdfile version #)
m30001| Thu Jun 14 01:30:40 [initandlisten] done repairDatabases
m30001| Thu Jun 14 01:30:40 BackgroundJob starting: snapshot
m30001| Thu Jun 14 01:30:40 [initandlisten] fd limit hard:1024 soft:1024 max conn: 819
m30001| Thu Jun 14 01:30:40 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:30:40 BackgroundJob starting: ClientCursorMonitor
m30001| Thu Jun 14 01:30:40 BackgroundJob starting: PeriodicTask::Runner
m30001| Thu Jun 14 01:30:40 BackgroundJob starting: TTLMonitor
m30001| Thu Jun 14 01:30:40 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30001| Thu Jun 14 01:30:40 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:30:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0 -vvv
m30001| Thu Jun 14 01:30:40 [initandlisten] connection accepted from 127.0.0.1:59263 #1 (1 connection now open)
m29000| Thu Jun 14 01:30:41
m29000| Thu Jun 14 01:30:41 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:30:41
m29000| Thu Jun 14 01:30:41 BackgroundJob starting: DataFileSync
m29000| Thu Jun 14 01:30:41 versionCmpTest passed
m29000| Thu Jun 14 01:30:41 versionArrayTest passed
m29000| Thu Jun 14 01:30:41 isInRangeTest passed
m29000| Thu Jun 14 01:30:41 shardKeyTest passed
m29000| Thu Jun 14 01:30:41 shardObjTest passed
m29000| Thu Jun 14 01:30:41 [initandlisten] MongoDB starting : pid=23619 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:30:41 [initandlisten]
m29000| Thu Jun 14 01:30:41 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:30:41 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:30:41 [initandlisten]
m29000| Thu Jun 14 01:30:41 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:30:41 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:30:41 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:30:41 [initandlisten]
m29000| Thu Jun 14 01:30:41 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:30:41 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:30:41 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:30:41 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000, vvv: true }
m29000| Thu Jun 14 01:30:41 [initandlisten] flushing directory /data/db/test-config0
m29000| Thu Jun 14 01:30:41 [initandlisten] opening db: local
m29000| Thu Jun 14 01:30:41 [initandlisten] enter repairDatabases (to check pdfile version #)
m29000| Thu Jun 14 01:30:41 [initandlisten] done repairDatabases
m29000| Thu Jun 14 01:30:41 BackgroundJob starting: snapshot
m29000| Thu Jun 14 01:30:41 [initandlisten] fd limit hard:1024 soft:1024 max conn: 819
m29000| Thu Jun 14 01:30:41 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:30:41 BackgroundJob starting: ClientCursorMonitor
m29000| Thu Jun 14 01:30:41 BackgroundJob starting: PeriodicTask::Runner
m29000| Thu Jun 14 01:30:41 BackgroundJob starting: TTLMonitor
m29000| Thu Jun 14 01:30:41 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m29000| Thu Jun 14 01:30:41 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:30:41 [websvr] ERROR: addr already in use
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40800 #1 (1 connection now open)
"localhost:29000"
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40801 #2 (2 connections now open)
m29000| Thu Jun 14 01:30:41 [conn2] opening db: config
ShardingTest test :
{
"config" : "localhost:29000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
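One quirk worth noting in the startup above: every mongod/mongos also opens an HTTP admin console on its listen port + 1000 (30000 -> 31000, 30001 -> 31001, 30999 -> 31999). For the config server that rule maps 29000 onto 30000, which the first shard already owns, hence the websvr "bind() failed errno:98" error a few lines up; only the web console fails, the config server itself keeps serving on 29000. A quick way to confirm which options a given port is running with, for illustration:

    var cfg = connect("localhost:29000/admin");
    printjson(cfg.runCommand({ getCmdLineOpts: 1 }).parsed);
    // -> { dbpath: "/data/db/test-config0", port: 29000, vvv: true }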
m29000| Thu Jun 14 01:30:41 [conn2] mmf create /data/db/test-config0/config.ns
m29000| Thu Jun 14 01:30:41 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:30:41 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29000| Thu Jun 14 01:30:41 [FileAllocator] flushing directory /data/db/test-config0
Thu Jun 14 01:30:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:29000 -v
m30999| Thu Jun 14 01:30:41 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:30:41 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23635 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:30:41 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:30:41 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:30:41 [mongosMain] options: { configdb: "localhost:29000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:30:41 [mongosMain] config string : localhost:29000
m30999| Thu Jun 14 01:30:41 [mongosMain] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:41 [mongosMain] connected connection!
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40802 #3 (3 connections now open)
m29000| Thu Jun 14 01:30:41 [FileAllocator] flushing directory /data/db/test-config0
m29000| Thu Jun 14 01:30:41 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.271 secs
m29000| Thu Jun 14 01:30:41 [conn2] mmf finishOpening 0xb185e000 /data/db/test-config0/config.ns len:16777216
m29000| Thu Jun 14 01:30:41 [conn2] mmf create /data/db/test-config0/config.0
m29000| Thu Jun 14 01:30:41 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:30:41 [FileAllocator] flushing directory /data/db/test-config0
m29000| Thu Jun 14 01:30:41 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.248 secs
m29000| Thu Jun 14 01:30:41 [conn2] mmf finishOpening 0xb085e000 /data/db/test-config0/config.0 len:16777216
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:30:41 [CheckConfigServers] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:41 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:30:41 [mongosMain] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:41 [mongosMain] connected connection!
m30999| Thu Jun 14 01:30:41 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:30:41 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:41 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:30:41 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:41 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:30:41 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:30:41 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:30:41 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:30:41
m30999| Thu Jun 14 01:30:41 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:30:41 [Balancer] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:30:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:41 [Balancer] connected connection!
m30999| Thu Jun 14 01:30:41 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:30:41 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30999:1339651841:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:30:41 [LockPinger] cluster localhost:29000 pinged successfully at Thu Jun 14 01:30:41 2012 by distributed lock pinger 'localhost:29000/domU-12-31-39-01-70-B4:30999:1339651841:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:30:41 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:30:41 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651841:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651841:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651841:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:30:41 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97701a35db917513e04e2" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:30:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651841:1804289383' acquired, ts : 4fd97701a35db917513e04e2
m30999| Thu Jun 14 01:30:41 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:30:41 [Balancer] no collections to balance
m30999| Thu Jun 14 01:30:41 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:30:41 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:30:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651841:1804289383' unlocked.
m29000| Thu Jun 14 01:30:41 [conn2] mmf close
m29000| Thu Jun 14 01:30:41 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:30:41 [conn2] allocExtent config.settings size 2304 0
m29000| Thu Jun 14 01:30:41 [conn2] adding _id index for collection config.settings
m29000| Thu Jun 14 01:30:41 [conn2] allocExtent config.system.indexes size 3840 0
m29000| Thu Jun 14 01:30:41 [conn2] New namespace: config.system.indexes
m29000| Thu Jun 14 01:30:41 [conn2] allocExtent config.system.namespaces size 2304 0
m29000| Thu Jun 14 01:30:41 [conn2] New namespace: config.system.namespaces
m29000| Thu Jun 14 01:30:41 [conn2] build index config.settings { _id: 1 }
m29000| mem info: before index start vsize: 142 resident: 31 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn2] external sort root: /data/db/test-config0/_tmp/esort.1339651841.0/
m29000| mem info: before final sort vsize: 142 resident: 31 mapped: 32
m29000| mem info: after final sort vsize: 142 resident: 31 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn2] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn2] allocExtent config.settings.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:41 [conn2] New namespace: config.settings.$_id_
m29000| Thu Jun 14 01:30:41 [conn2] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn2] New namespace: config.settings
m29000| Thu Jun 14 01:30:41 [conn2] insert config.settings keyUpdates:0 locks(micros) w:537031 536ms
m29000| Thu Jun 14 01:30:41 [conn3] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) w:146 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:146 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:764 w:146 reslen:203 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called config.version {}
m29000| Thu Jun 14 01:30:41 [conn3] query config.version ntoreturn:0 keyUpdates:0 locks(micros) r:866 w:146 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called config.$cmd { count: "shards", query: {} }
m29000| Thu Jun 14 01:30:41 [conn3] run command config.$cmd { count: "shards", query: {} }
m29000| Thu Jun 14 01:30:41 [conn3] command config.$cmd command: { count: "shards", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:926 w:146 reslen:58 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called config.$cmd { count: "databases", query: {} }
m29000| Thu Jun 14 01:30:41 [conn3] run command config.$cmd { count: "databases", query: {} }
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40806 #4 (4 connections now open)
m29000| Thu Jun 14 01:30:41 [conn3] command config.$cmd command: { count: "databases", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1061 w:146 reslen:58 0ms
m29000| Thu Jun 14 01:30:41 [conn4] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) w:97 0ms
m29000| Thu Jun 14 01:30:41 [conn4] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40807 #5 (5 connections now open)
m29000| Thu Jun 14 01:30:41 [conn4] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn4] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:97 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn5] allocExtent config.version size 1536 0
m29000| Thu Jun 14 01:30:41 [conn5] adding _id index for collection config.version
m29000| Thu Jun 14 01:30:41 [conn5] build index config.version { _id: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn5] external sort root: /data/db/test-config0/_tmp/esort.1339651841.1/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn5] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn5] allocExtent config.version.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:41 [conn5] New namespace: config.version.$_id_
m29000| Thu Jun 14 01:30:41 [conn5] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn5] New namespace: config.version
m29000| Thu Jun 14 01:30:41 [conn5] insert config.version keyUpdates:0 locks(micros) w:990 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called admin.$cmd { ismaster: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] run command admin.$cmd { ismaster: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1061 w:146 reslen:90 0ms
m29000| Thu Jun 14 01:30:41 [conn4] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn4] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn4] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:636 w:97 reslen:249 0ms
m29000| Thu Jun 14 01:30:41 [conn5] runQuery called config.version {}
m29000| Thu Jun 14 01:30:41 [conn5] query config.version ntoreturn:0 keyUpdates:0 locks(micros) r:57 w:990 nreturned:1 reslen:47 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called config.settings {}
m29000| Thu Jun 14 01:30:41 [conn3] query config.settings ntoreturn:0 keyUpdates:0 locks(micros) r:1081 w:146 nreturned:1 reslen:59 0ms
m29000| Thu Jun 14 01:30:41 [conn3] create collection config.chunks {}
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.chunks size 8192 0
m29000| Thu Jun 14 01:30:41 [conn3] adding _id index for collection config.chunks
m29000| Thu Jun 14 01:30:41 [conn3] build index config.chunks { _id: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.2/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn5] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.chunks.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.chunks.$_id_
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.chunks
m29000| Thu Jun 14 01:30:41 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:30:41 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.3/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.chunks.$ns_1_min_1 size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.chunks.$ns_1_min_1
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1081 w:1729 1ms
m29000| Thu Jun 14 01:30:41 [conn5] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:134 w:990 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.4/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.chunks.$ns_1_shard_1_min_1 size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.chunks.$ns_1_shard_1_min_1
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1081 w:2361 0ms
m29000| Thu Jun 14 01:30:41 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.5/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.chunks.$ns_1_lastmod_1 size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.chunks.$ns_1_lastmod_1
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1081 w:2992 0ms
m29000| Thu Jun 14 01:30:41 [conn3] create collection config.shards {}
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.shards size 8192 0
m29000| Thu Jun 14 01:30:41 [conn3] adding _id index for collection config.shards
m29000| Thu Jun 14 01:30:41 [conn3] build index config.shards { _id: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.6/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.shards.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.shards.$_id_
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.shards
m29000| Thu Jun 14 01:30:41 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:30:41 [conn3] build index config.shards { host: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.7/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.shards.$host_1 size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.shards.$host_1
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1081 w:4372 1ms
m29000| Thu Jun 14 01:30:41 [conn5] allocExtent config.mongos size 4608 0
m29000| Thu Jun 14 01:30:41 [conn5] adding _id index for collection config.mongos
m29000| Thu Jun 14 01:30:41 [conn5] build index config.mongos { _id: 1 }
m29000| mem info: before index start vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn5] external sort root: /data/db/test-config0/_tmp/esort.1339651841.8/
m29000| mem info: before final sort vsize: 145 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 145 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn5] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn5] allocExtent config.mongos.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:41 [conn5] New namespace: config.mongos.$_id_
m29000| Thu Jun 14 01:30:41 [conn5] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn5] New namespace: config.mongos
m29000| Thu Jun 14 01:30:41 [conn5] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30999" } update: { $set: { ping: new Date(1339651841739), up: 0, waiting: false } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:134 w:2174 1ms
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40808 #6 (6 connections now open)
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:41 [conn6] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:40 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called config.settings { _id: "chunksize" }
m29000| Thu Jun 14 01:30:41 [conn6] query config.settings query: { _id: "chunksize" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:67 reslen:59 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called config.settings { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn6] query config.settings query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:81 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:101 reslen:1625 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:118 reslen:1713 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:136 reslen:1713 0ms
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.lockpings size 4864 0
m29000| Thu Jun 14 01:30:41 [conn3] adding _id index for collection config.lockpings
m29000| Thu Jun 14 01:30:41 [conn3] build index config.lockpings { _id: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.9/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.lockpings.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.lockpings.$_id_
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.lockpings
m29000| Thu Jun 14 01:30:41 [conn3] update config.lockpings query: { _id: "domU-12-31-39-01-70-B4:30999:1339651841:1804289383" } update: { $set: { ping: new Date(1339651841743) } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:1081 w:5425 1ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1081 w:5425 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called config.locks {}
m29000| Thu Jun 14 01:30:41 [conn3] query config.locks ntoreturn:0 keyUpdates:0 locks(micros) r:1165 w:5425 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn3] remove config.lockpings query: { _id: { $nin: {} }, ping: { $lt: new Date(1339306241743) } } keyUpdates:0 locks(micros) r:1165 w:5587 0ms
m29000| Thu Jun 14 01:30:41 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1165 w:5587 reslen:67 0ms
m29000| Thu Jun 14 01:30:41 [conn3] build index config.lockpings { ping: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651841.10/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] not using file. size:31 _compares:0
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] allocExtent config.lockpings.$ping_1 size 36864 0
m29000| Thu Jun 14 01:30:41 [conn3] New namespace: config.lockpings.$ping_1
m29000| Thu Jun 14 01:30:41 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn3] build index done. scanned 1 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn3] insert config.system.indexes keyUpdates:0 locks(micros) r:1165 w:6315 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn6] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:181 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn6] allocExtent config.locks size 2816 0
m29000| Thu Jun 14 01:30:41 [conn6] adding _id index for collection config.locks
m29000| Thu Jun 14 01:30:41 [conn6] build index config.locks { _id: 1 }
m29000| mem info: before index start vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn6] external sort root: /data/db/test-config0/_tmp/esort.1339651841.11/
m29000| mem info: before final sort vsize: 146 resident: 32 mapped: 32
m29000| mem info: after final sort vsize: 146 resident: 32 mapped: 32
m29000| Thu Jun 14 01:30:41 [conn6] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:41 [conn6] allocExtent config.locks.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:41 [conn6] New namespace: config.locks.$_id_
m29000| Thu Jun 14 01:30:41 [conn6] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:41 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:41 [conn6] New namespace: config.locks
m29000| Thu Jun 14 01:30:41 [conn6] insert config.locks keyUpdates:0 locks(micros) r:181 w:781 0ms
m29000| Thu Jun 14 01:30:41 [conn6] running multiple plans
m29000| Thu Jun 14 01:30:41 [conn6] update config.locks query: { _id: "balancer", state: 0 } update: { $set: { state: 1, who: "domU-12-31-39-01-70-B4:30999:1339651841:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30999:1339651841:1804289383", when: new Date(1339651841747), why: "doing balance round", ts: ObjectId('4fd97701a35db917513e04e2') } } nscanned:1 nmoved:1 nupdated:1 keyUpdates:0 locks(micros) r:181 w:1123 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:181 w:1123 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn6] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:201 w:1123 reslen:256 0ms
m29000| Thu Jun 14 01:30:41 [conn6] update config.locks query: { _id: "balancer" } update: { $set: { state: 2, who: "domU-12-31-39-01-70-B4:30999:1339651841:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30999:1339651841:1804289383", when: new Date(1339651841747), why: "doing balance round", ts: ObjectId('4fd97701a35db917513e04e2') } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:201 w:1173 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:201 w:1173 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn6] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:216 w:1173 reslen:256 0ms
m29000| Thu Jun 14 01:30:41 [conn5] runQuery called config.collections {}
m29000| Thu Jun 14 01:30:41 [conn5] query config.collections ntoreturn:0 keyUpdates:0 locks(micros) r:196 w:2174 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn6] running multiple plans
m29000| Thu Jun 14 01:30:41 [conn6] update config.locks query: { _id: "balancer", ts: ObjectId('4fd97701a35db917513e04e2') } update: { $set: { state: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) r:216 w:1310 0ms
m29000| Thu Jun 14 01:30:41 [conn6] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn6] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:216 w:1310 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn5] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30999" } update: { $set: { ping: new Date(1339651841748), up: 0, waiting: true } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:196 w:2210 0ms
Thu Jun 14 01:30:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:29000 -vv
m30999| Thu Jun 14 01:30:41 [mongosMain] connection accepted from 127.0.0.1:51471 #1 (1 connection now open)
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40810 #7 (7 connections now open)
m30998| Thu Jun 14 01:30:41 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:30:41 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23655 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:30:41 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:30:41 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:30:41 [mongosMain] options: { configdb: "localhost:29000", port: 30998, vv: true }
m30998| Thu Jun 14 01:30:41 [mongosMain] config string : localhost:29000
m30998| Thu Jun 14 01:30:41 [mongosMain] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:30:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:30:41 [conn7] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) w:131 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:131 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1062 w:131 reslen:476 1ms
m30998| Thu Jun 14 01:30:41 [mongosMain] connected connection!
m30998| Thu Jun 14 01:30:41 BackgroundJob starting: CheckConfigServers
m29000| Thu Jun 14 01:30:41 [conn7] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) r:1062 w:226 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1062 w:226 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] run command config.$cmd { dbhash: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] command config.$cmd command: { dbhash: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1841 w:226 reslen:476 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called config.version {}
m29000| Thu Jun 14 01:30:41 [conn7] query config.version ntoreturn:0 keyUpdates:0 locks(micros) r:1898 w:226 nreturned:1 reslen:47 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called config.settings {}
m29000| Thu Jun 14 01:30:41 [conn7] query config.settings ntoreturn:0 keyUpdates:0 locks(micros) r:1917 w:226 nreturned:1 reslen:59 0ms
m29000| Thu Jun 14 01:30:41 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:1917 w:248 0ms
m29000| Thu Jun 14 01:30:41 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:1917 w:262 0ms
m29000| Thu Jun 14 01:30:41 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:1917 w:276 0ms
m29000| Thu Jun 14 01:30:41 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:1917 w:316 0ms
m30998| Thu Jun 14 01:30:41 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:30:41 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:30:41 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:30:41 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:30:41 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:30:41 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:30:41 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:30:41 [Balancer] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:30:41 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:30:41 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:30:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40812 #8 (8 connections now open)
m30998| Thu Jun 14 01:30:41 [Balancer] connected connection!
m29000| Thu Jun 14 01:30:41 [conn8] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:41 [conn8] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:32 nreturned:0 reslen:20 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:30:41 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:30:41
m30998| Thu Jun 14 01:30:41 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:30:41 [conn8] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30998" } update: { $set: { ping: new Date(1339651841829), up: 0, waiting: false } } nscanned:0 idhack:1 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:32 w:137 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:30:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:30:41 [initandlisten] connection accepted from 127.0.0.1:40813 #9 (9 connections now open)
m30998| Thu Jun 14 01:30:41 [Balancer] connected connection!
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:41 [conn9] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:27 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called config.settings { _id: "chunksize" }
m29000| Thu Jun 14 01:30:41 [conn9] query config.settings query: { _id: "chunksize" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:50 reslen:59 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] Refreshing MaxChunkSize: 50
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called config.settings { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn9] query config.settings query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:63 reslen:20 0ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:80 reslen:1713 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] skew from remote server localhost:29000 found: 0
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:95 reslen:1713 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] skew from remote server localhost:29000 found: 0
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] run command admin.$cmd { serverStatus: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:110 reslen:1713 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] skew from remote server localhost:29000 found: 0
m30998| Thu Jun 14 01:30:41 [Balancer] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
m30998| Thu Jun 14 01:30:41 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30998:1339651841:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:30:41 [conn7] update config.lockpings query: { _id: "domU-12-31-39-01-70-B4:30998:1339651841:1804289383" } update: { $set: { ping: new Date(1339651841832) } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) r:1917 w:412 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1917 w:412 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called config.locks {}
m29000| Thu Jun 14 01:30:41 [conn7] query config.locks ntoreturn:0 keyUpdates:0 locks(micros) r:1965 w:412 nreturned:1 reslen:256 0ms
m29000| Thu Jun 14 01:30:41 [conn7] remove config.lockpings query: { _id: { $nin: [ "domU-12-31-39-01-70-B4:30999:1339651841:1804289383" ] }, ping: { $lt: new Date(1339306241832) } } keyUpdates:0 locks(micros) r:1965 w:569 0ms
m29000| Thu Jun 14 01:30:41 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1965 w:569 reslen:67 0ms
m29000| Thu Jun 14 01:30:41 [conn7] insert config.system.indexes keyUpdates:0 locks(micros) r:1965 w:587 0ms
m30998| Thu Jun 14 01:30:41 [LockPinger] cluster localhost:29000 pinged successfully at Thu Jun 14 01:30:41 2012 by distributed lock pinger 'localhost:29000/domU-12-31-39-01-70-B4:30998:1339651841:1804289383', sleeping for 30000ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn9] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:127 reslen:256 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651841:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339651841:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339651841:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:30:41 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd977018b421d406336ba96" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd97701a35db917513e04e2" } }
m29000| Thu Jun 14 01:30:41 [conn9] running multiple plans
m29000| Thu Jun 14 01:30:41 [conn9] update config.locks query: { _id: "balancer", state: 0, ts: ObjectId('4fd97701a35db917513e04e2') } update: { $set: { state: 1, who: "domU-12-31-39-01-70-B4:30998:1339651841:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30998:1339651841:1804289383", when: new Date(1339651841833), why: "doing balance round", ts: ObjectId('4fd977018b421d406336ba96') } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) r:127 w:189 0ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:127 w:189 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn9] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:142 w:189 reslen:256 0ms
m29000| Thu Jun 14 01:30:41 [conn9] update config.locks query: { _id: "balancer" } update: { $set: { state: 2, who: "domU-12-31-39-01-70-B4:30998:1339651841:1804289383:Balancer:846930886", process: "domU-12-31-39-01-70-B4:30998:1339651841:1804289383", when: new Date(1339651841833), why: "doing balance round", ts: ObjectId('4fd977018b421d406336ba96') } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:142 w:235 0ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:142 w:235 reslen:85 0ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called config.locks { _id: "balancer" }
m29000| Thu Jun 14 01:30:41 [conn9] query config.locks query: { _id: "balancer" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:156 w:235 reslen:256 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651841:1804289383' acquired, ts : 4fd977018b421d406336ba96
m30998| Thu Jun 14 01:30:41 [Balancer] *** start balancing round
m29000| Thu Jun 14 01:30:41 [conn8] runQuery called config.collections {}
m29000| Thu Jun 14 01:30:41 [conn8] query config.collections ntoreturn:0 keyUpdates:0 locks(micros) r:110 w:137 nreturned:0 reslen:20 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] no collections to balance
m30998| Thu Jun 14 01:30:41 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:30:41 [Balancer] *** end of balancing round
m29000| Thu Jun 14 01:30:41 [conn9] running multiple plans
m29000| Thu Jun 14 01:30:41 [conn9] update config.locks query: { _id: "balancer", ts: ObjectId('4fd977018b421d406336ba96') } update: { $set: { state: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) r:156 w:379 0ms
m29000| Thu Jun 14 01:30:41 [conn9] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:41 [conn9] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:156 w:379 reslen:85 0ms
m30998| Thu Jun 14 01:30:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651841:1804289383' unlocked.
m29000| Thu Jun 14 01:30:41 [conn8] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30998" } update: { $set: { ping: new Date(1339651841836), up: 0, waiting: true } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:110 w:170 0ms
m29000| Thu Jun 14 01:30:41 [FileAllocator] flushing directory /data/db/test-config0
m30998| Thu Jun 14 01:30:42 [mongosMain] connection accepted from 127.0.0.1:35808 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:30:42 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:30:42 [conn3] runQuery called config.databases { _id: "admin" }
m29000| Thu Jun 14 01:30:42 [conn3] query config.databases query: { _id: "admin" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1240 w:6315 reslen:20 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called config.databases { _id: /^admin$/i }
m29000| Thu Jun 14 01:30:42 [conn5] query config.databases query: { _id: /^admin$/i } ntoreturn:1 keyUpdates:0 locks(micros) r:360 w:2210 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:42 [conn3] allocExtent config.databases size 3328 0
m29000| Thu Jun 14 01:30:42 [conn3] adding _id index for collection config.databases
m29000| Thu Jun 14 01:30:42 [conn3] build index config.databases { _id: 1 }
m29000| mem info: before index start vsize: 149 resident: 33 mapped: 32
m29000| Thu Jun 14 01:30:42 [conn3] external sort root: /data/db/test-config0/_tmp/esort.1339651842.12/
m29000| mem info: before final sort vsize: 149 resident: 33 mapped: 32
m29000| mem info: after final sort vsize: 149 resident: 33 mapped: 32
m29000| Thu Jun 14 01:30:42 [conn3] external sort used : 0 files in 0 secs
m29000| Thu Jun 14 01:30:42 [conn3] allocExtent config.databases.$_id_ size 36864 0
m29000| Thu Jun 14 01:30:42 [conn3] New namespace: config.databases.$_id_
m29000| Thu Jun 14 01:30:42 [conn3] done building bottom layer, going to commit
m29000| Thu Jun 14 01:30:42 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:30:42 [conn3] New namespace: config.databases
m29000| Thu Jun 14 01:30:42 [conn3] update config.databases query: { _id: "admin" } update: { _id: "admin", partitioned: false, primary: "config" } nscanned:0 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1240 w:7473 1ms
m29000| Thu Jun 14 01:30:42 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:42 [conn3] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:42 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1240 w:7473 reslen:85 0ms
m30999| Thu Jun 14 01:30:42 [conn] put [admin] on: config:localhost:29000
m30999| Thu Jun 14 01:30:42 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:42 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:42 [conn] connected connection!
m30000| Thu Jun 14 01:30:42 [initandlisten] connection accepted from 127.0.0.1:39390 #2 (2 connections now open)
m30000| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:67 0ms
m30000| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { isdbgrid: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { isdbgrid: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:99 0ms
m30000| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { isMaster: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { isMaster: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:90 0ms
m30000| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { listDatabases: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { listDatabases: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] command: { listDatabases: 1 }
m30000| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:14 reslen:124 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called config.shards { query: { _id: /^shard/ }, orderby: { _id: -1 } }
m29000| Thu Jun 14 01:30:42 [conn5] query config.shards query: { query: { _id: /^shard/ }, orderby: { _id: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:542 w:2210 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called config.shards { host: "localhost:30000" }
m29000| Thu Jun 14 01:30:42 [conn5] query config.shards query: { host: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:618 w:2210 nreturned:0 reslen:20 0ms
m30999| Thu Jun 14 01:30:42 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m29000| Thu Jun 14 01:30:42 [conn5] insert config.shards keyUpdates:0 locks(micros) r:618 w:2257 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:42 [conn5] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:42 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:618 w:2257 reslen:67 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:42 [conn5] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:642 w:2257 nreturned:1 reslen:70 0ms
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:30:42 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:42 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:42 [initandlisten] connection accepted from 127.0.0.1:59281 #2 (2 connections now open)
m30001| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:67 0ms
m30999| Thu Jun 14 01:30:42 [conn] connected connection!
m30001| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { isdbgrid: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { isdbgrid: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:99 0ms
m30001| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { isMaster: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { isMaster: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:90 0ms
m30001| Thu Jun 14 01:30:42 [conn2] runQuery called admin.$cmd { listDatabases: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] run command admin.$cmd { listDatabases: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] command: { listDatabases: 1 }
m30001| Thu Jun 14 01:30:42 [conn2] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:12 reslen:124 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called config.shards { query: { _id: /^shard/ }, orderby: { _id: -1 } }
m29000| Thu Jun 14 01:30:42 [conn5] query config.shards query: { query: { _id: /^shard/ }, orderby: { _id: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:786 w:2257 nreturned:1 reslen:70 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called config.shards { host: "localhost:30001" }
m29000| Thu Jun 14 01:30:42 [conn5] query config.shards query: { host: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) r:848 w:2257 nreturned:0 reslen:20 0ms
m29000| Thu Jun 14 01:30:42 [conn5] insert config.shards keyUpdates:0 locks(micros) r:848 w:2348 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:42 [conn5] run command admin.$cmd { getlasterror: 1 }
m29000| Thu Jun 14 01:30:42 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:848 w:2348 reslen:67 0ms
m29000| Thu Jun 14 01:30:42 [conn5] runQuery called config.shards {}
m29000| Thu Jun 14 01:30:42 [conn5] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:868 w:2348 nreturned:2 reslen:120 0ms
m30999| Thu Jun 14 01:30:42 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:30:42 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:30:42 [conn3] Socket recv() conn closed? 127.0.0.1:40802
m29000| Thu Jun 14 01:30:42 [conn3] SocketException: remote: 127.0.0.1:40802 error: 9001 socket exception [0] server [127.0.0.1:40802]
m29000| Thu Jun 14 01:30:42 [conn3] end connection 127.0.0.1:40802 (8 connections now open)
m29000| Thu Jun 14 01:30:42 [conn4] Socket recv() conn closed? 127.0.0.1:40806
m29000| Thu Jun 14 01:30:42 [conn4] SocketException: remote: 127.0.0.1:40806 error: 9001 socket exception [0] server [127.0.0.1:40806]
m29000| Thu Jun 14 01:30:42 [conn4] end connection 127.0.0.1:40806 (7 connections now open)
m29000| Thu Jun 14 01:30:42 [conn5] Socket recv() conn closed? 127.0.0.1:40807
m29000| Thu Jun 14 01:30:42 [conn5] SocketException: remote: 127.0.0.1:40807 error: 9001 socket exception [0] server [127.0.0.1:40807]
m29000| Thu Jun 14 01:30:42 [conn5] end connection 127.0.0.1:40807 (6 connections now open)
m29000| Thu Jun 14 01:30:42 [conn6] Socket recv() conn closed? 127.0.0.1:40808
Thu Jun 14 01:30:42 shell: stopped mongo program on port 30999
m30001| Thu Jun 14 01:30:42 [conn2] Socket recv() conn closed? 127.0.0.1:59281
m30000| Thu Jun 14 01:30:42 [conn2] Socket recv() conn closed? 127.0.0.1:39390
m30000| Thu Jun 14 01:30:42 [conn2] SocketException: remote: 127.0.0.1:39390 error: 9001 socket exception [0] server [127.0.0.1:39390]
m30000| Thu Jun 14 01:30:42 [conn2] end connection 127.0.0.1:39390 (1 connection now open)
m30001| Thu Jun 14 01:30:42 [conn2] SocketException: remote: 127.0.0.1:59281 error: 9001 socket exception [0] server [127.0.0.1:59281]
m30001| Thu Jun 14 01:30:42 [conn2] end connection 127.0.0.1:59281 (1 connection now open)
m29000| Thu Jun 14 01:30:42 [conn6] SocketException: remote: 127.0.0.1:40808 error: 9001 socket exception [0] server [127.0.0.1:40808]
m29000| Thu Jun 14 01:30:42 [conn6] end connection 127.0.0.1:40808 (5 connections now open)
m30998| Thu Jun 14 01:30:42 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:30:42 [conn7] Socket recv() conn closed? 127.0.0.1:40810
m29000| Thu Jun 14 01:30:42 [conn7] SocketException: remote: 127.0.0.1:40810 error: 9001 socket exception [0] server [127.0.0.1:40810]
m29000| Thu Jun 14 01:30:42 [conn7] end connection 127.0.0.1:40810 (4 connections now open)
m29000| Thu Jun 14 01:30:42 [conn8] Socket recv() conn closed? 127.0.0.1:40812
m29000| Thu Jun 14 01:30:42 [conn8] SocketException: remote: 127.0.0.1:40812 error: 9001 socket exception [0] server [127.0.0.1:40812]
m29000| Thu Jun 14 01:30:42 [conn8] end connection 127.0.0.1:40812 (3 connections now open)
m29000| Thu Jun 14 01:30:42 [conn9] Socket recv() conn closed? 127.0.0.1:40813
m29000| Thu Jun 14 01:30:42 [conn9] SocketException: remote: 127.0.0.1:40813 error: 9001 socket exception [0] server [127.0.0.1:40813]
m29000| Thu Jun 14 01:30:42 [conn9] end connection 127.0.0.1:40813 (2 connections now open)
m29000| Thu Jun 14 01:30:42 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.572 secs
Thu Jun 14 01:30:43 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:30:43 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:30:43 [interruptThread] now exiting
m30000| Thu Jun 14 01:30:43 dbexit:
m30000| Thu Jun 14 01:30:43 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:30:43 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:30:43 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:30:43 [interruptThread] closing listening socket: 21
m30000| Thu Jun 14 01:30:43 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:30:43 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:30:43 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:30:43 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:30:43 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:30:43 [interruptThread] mmf close
m30000| Thu Jun 14 01:30:43 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:30:43 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:30:43 [interruptThread] shutdown: groupCommitMutex
m30000| Thu Jun 14 01:30:43 dbexit: really exiting now
Thu Jun 14 01:30:43 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Thu Jun 14 01:30:43 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31200
Thu Jun 14 01:30:43 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31200 failed couldn't connect to server domU-12-31-39-01-70-B4:31200
Thu Jun 14 01:30:44 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:30:44 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:30:44 [interruptThread] now exiting
m30001| Thu Jun 14 01:30:44 dbexit:
m30001| Thu Jun 14 01:30:44 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:30:44 [interruptThread] closing listening socket: 22
m30001| Thu Jun 14 01:30:44 [interruptThread] closing listening socket: 25
m30001| Thu Jun 14 01:30:44 [interruptThread] closing listening socket: 26
m30001| Thu Jun 14 01:30:44 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:30:44 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:30:44 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:30:44 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:30:44 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:30:44 [interruptThread] mmf close
m30001| Thu Jun 14 01:30:44 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:30:44 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:30:44 [interruptThread] shutdown: groupCommitMutex
m30001| Thu Jun 14 01:30:44 dbexit: really exiting now
Thu Jun 14 01:30:44 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31200 socket exception
Thu Jun 14 01:30:45 shell: stopped mongo program on port 30001
m29000| Thu Jun 14 01:30:45 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:30:45 [interruptThread] now exiting
m29000| Thu Jun 14 01:30:45 dbexit:
m29000| Thu Jun 14 01:30:45 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:30:45 [interruptThread] closing listening socket: 28
m29000| Thu Jun 14 01:30:45 [interruptThread] closing listening socket: 29
m29000| Thu Jun 14 01:30:45 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:30:45 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:30:45 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:30:45 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:30:45 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:30:45 [interruptThread] mmf close
m29000| Thu Jun 14 01:30:45 [interruptThread] mmf close /data/db/test-config0/config.ns
m29000| Thu Jun 14 01:30:45 [interruptThread] mmf close /data/db/test-config0/config.0
m29000| Thu Jun 14 01:30:45 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:30:45 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:30:45 [interruptThread] shutdown: groupCommitMutex
m29000| Thu Jun 14 01:30:45 dbexit: really exiting now
Thu Jun 14 01:30:45 [ReplicaSetMonitorWatcher] warning: No primary detected for set test-rs1
Thu Jun 14 01:30:45 [ReplicaSetMonitorWatcher] All nodes for set test-rs1 are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
Thu Jun 14 01:30:46 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 5.489 seconds ***
33478.270054ms
Thu Jun 14 01:30:46 [initandlisten] connection accepted from 127.0.0.1:59394 #17 (4 connections now open)
*******************************************
Test : count1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/count1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/count1.js";TestData.testFile = "count1.js";TestData.testName = "count1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:30:46 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/count10'
Thu Jun 14 01:30:46 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/count10
m30000| Thu Jun 14 01:30:46
m30000| Thu Jun 14 01:30:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:30:46
m30000| Thu Jun 14 01:30:46 [initandlisten] MongoDB starting : pid=23683 port=30000 dbpath=/data/db/count10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:30:46 [initandlisten]
m30000| Thu Jun 14 01:30:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:30:46 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:30:46 [initandlisten]
m30000| Thu Jun 14 01:30:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:30:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:30:46 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:30:46 [initandlisten]
m30000| Thu Jun 14 01:30:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:30:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:30:46 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:30:46 [initandlisten] options: { dbpath: "/data/db/count10", port: 30000 }
m30000| Thu Jun 14 01:30:46 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:30:46 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/count11'
m30000| Thu Jun 14 01:30:46 [initandlisten] connection accepted from 127.0.0.1:39395 #1 (1 connection now open)
Thu Jun 14 01:30:46 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/count11
m30001| Thu Jun 14 01:30:46
m30001| Thu Jun 14 01:30:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:30:46
m30001| Thu Jun 14 01:30:46 [initandlisten] MongoDB starting : pid=23696 port=30001 dbpath=/data/db/count11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:30:46 [initandlisten]
m30001| Thu Jun 14 01:30:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:30:46 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:30:46 [initandlisten]
m30001| Thu Jun 14 01:30:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:30:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:30:46 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:30:46 [initandlisten]
m30001| Thu Jun 14 01:30:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:30:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:30:46 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:30:46 [initandlisten] options: { dbpath: "/data/db/count11", port: 30001 }
m30001| Thu Jun 14 01:30:46 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:30:46 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:30:46 [initandlisten] connection accepted from 127.0.0.1:59287 #1 (1 connection now open)
m30000| Thu Jun 14 01:30:46 [initandlisten] connection accepted from 127.0.0.1:39398 #2 (2 connections now open)
ShardingTest count1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
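
The "ShardingTest count1" block above is printed while the test builds its cluster: two mongod shards on ports 30000/30001 and one mongos on 30999. A minimal sketch of the kind of shell code that produces such a cluster, assuming the positional ShardingTest constructor used by 2.1-era jstests (the exact arguments in count1.js may differ):

// Sketch only: ( testName , numShards ); one mongos is started by default.
var s = new ShardingTest( "count1" , 2 );
var db = s.getDB( "test" );        // all access goes through the mongos router
s.printShardingStatus();           // prints a chunk table like the one further down in this log
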
Thu Jun 14 01:30:46 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:30:46 [FileAllocator] allocating new datafile /data/db/count10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:30:46 [FileAllocator] creating directory /data/db/count10/_tmp
m30999| Thu Jun 14 01:30:46 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:30:46 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23711 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:30:46 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:30:46 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:30:46 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:30:46 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:30:46 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:46 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:46 [initandlisten] connection accepted from 127.0.0.1:39399 #3 (3 connections now open)
m30999| Thu Jun 14 01:30:46 [mongosMain] connected connection!
m30000| Thu Jun 14 01:30:46 [FileAllocator] done allocating datafile /data/db/count10/config.ns, size: 16MB, took 0.262 secs
m30000| Thu Jun 14 01:30:46 [FileAllocator] allocating new datafile /data/db/count10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:30:47 [FileAllocator] done allocating datafile /data/db/count10/config.0, size: 16MB, took 0.275 secs
m30999| Thu Jun 14 01:30:47 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:47 [mongosMain] connected connection!
m30999| Thu Jun 14 01:30:47 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:30:47 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:47 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:30:47 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:47 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:30:47 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:30:47 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:30:47 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:30:47
m30999| Thu Jun 14 01:30:47 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:30:47 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:47 [Balancer] connected connection!
m30999| Thu Jun 14 01:30:47 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:30:47 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:30:47 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651847:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651847:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651847:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:30:47 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97707e30823f0d8c9edd5" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:30:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651847:1804289383' acquired, ts : 4fd97707e30823f0d8c9edd5
m30999| Thu Jun 14 01:30:47 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:30:47 [Balancer] no collections to balance
m30999| Thu Jun 14 01:30:47 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:30:47 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:30:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651847:1804289383' unlocked.
m30999| Thu Jun 14 01:30:47 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651847:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:30:47 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:30:47 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651847:1804289383', sleeping for 30000ms
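
The [Balancer] and [LockPinger] lines above show the mongos taking and releasing the distributed "balancer" lock, which is stored as documents on the config server (localhost:30000 here). A sketch of how those documents can be inspected directly; the connection target is taken from the log, the rest is illustration:

// Sketch: look at the lock and ping documents the balancer log lines refer to.
var conf = new Mongo( "localhost:30000" ).getDB( "config" );
conf.locks.find( { _id : "balancer" } ).forEach( printjson );   // the log prints state 0 once the round is unlocked
conf.lockpings.find().forEach( printjson );                     // one ping document per distributed-lock pinger process
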
m30000| Thu Jun 14 01:30:47 [FileAllocator] allocating new datafile /data/db/count10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:30:47 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn2] insert config.settings keyUpdates:0 locks(micros) w:550372 550ms
m30000| Thu Jun 14 01:30:47 [initandlisten] connection accepted from 127.0.0.1:39403 #4 (4 connections now open)
m30000| Thu Jun 14 01:30:47 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:30:47 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:30:47 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [initandlisten] connection accepted from 127.0.0.1:39404 #5 (5 connections now open)
m30000| Thu Jun 14 01:30:47 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:47 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:30:47 [mongosMain] connection accepted from 127.0.0.1:51492 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:30:47 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:30:47 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:30:47 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30000| Thu Jun 14 01:30:47 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:30:47 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:30:47 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:47 [initandlisten] connection accepted from 127.0.0.1:59296 #2 (2 connections now open)
m30999| Thu Jun 14 01:30:47 [conn] connected connection!
m30999| Thu Jun 14 01:30:47 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:30:47 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:47 [initandlisten] connection accepted from 127.0.0.1:39407 #6 (6 connections now open)
m30999| Thu Jun 14 01:30:47 [conn] connected connection!
m30999| Thu Jun 14 01:30:47 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97707e30823f0d8c9edd4
m30999| Thu Jun 14 01:30:47 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:30:47 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:30:47 [initandlisten] connection accepted from 127.0.0.1:59298 #3 (3 connections now open)
m30999| Thu Jun 14 01:30:47 [conn] connected connection!
m30999| Thu Jun 14 01:30:47 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97707e30823f0d8c9edd4
m30999| Thu Jun 14 01:30:47 [conn] initializing shard connection to localhost:30001
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
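
The "going to add shard" lines and the { "shardAdded" : ..., "ok" : 1 } replies come from the addshard admin command that ShardingTest issues once per mongod. Run by hand against the mongos on port 30999 it would look roughly like this (a sketch, not the harness's own code):

// Sketch: add the two shards seen above through the mongos admin database.
var admin = new Mongo( "localhost:30999" ).getDB( "admin" );
printjson( admin.runCommand( { addshard : "localhost:30000" } ) );   // -> { "shardAdded" : "shard0000", "ok" : 1 }
printjson( admin.runCommand( { addshard : "localhost:30001" } ) );   // -> { "shardAdded" : "shard0001", "ok" : 1 }
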
m30999| Thu Jun 14 01:30:47 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:30:47 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:30:47 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:30:47 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:30:47 [FileAllocator] allocating new datafile /data/db/count11/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:30:47 [FileAllocator] creating directory /data/db/count11/_tmp
m30000| Thu Jun 14 01:30:47 [FileAllocator] done allocating datafile /data/db/count10/config.1, size: 32MB, took 0.62 secs
m30001| Thu Jun 14 01:30:48 [FileAllocator] done allocating datafile /data/db/count11/test.ns, size: 16MB, took 0.358 secs
m30001| Thu Jun 14 01:30:48 [FileAllocator] allocating new datafile /data/db/count11/test.0, filling with zeroes...
m30001| Thu Jun 14 01:30:48 [FileAllocator] done allocating datafile /data/db/count11/test.0, size: 16MB, took 0.243 secs
m30001| Thu Jun 14 01:30:48 [FileAllocator] allocating new datafile /data/db/count11/test.1, filling with zeroes...
m30001| Thu Jun 14 01:30:48 [conn3] build index test.bar { _id: 1 }
m30001| Thu Jun 14 01:30:48 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:30:48 [conn3] insert test.bar keyUpdates:0 locks(micros) W:59 w:1150636 1150ms
m30999| Thu Jun 14 01:30:48 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:30:48 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:48 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:48 [conn] connected connection!
m30001| Thu Jun 14 01:30:48 [initandlisten] connection accepted from 127.0.0.1:59299 #4 (4 connections now open)
m30999| Thu Jun 14 01:30:48 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { name: 1.0 } }
m30999| Thu Jun 14 01:30:48 [conn] enable sharding on: test.foo with shard key: { name: 1.0 }
m30999| Thu Jun 14 01:30:48 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd97708e30823f0d8c9edd6
m30999| Thu Jun 14 01:30:48 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd97708e30823f0d8c9edd6 based on: (empty)
m30999| Thu Jun 14 01:30:48 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), shard: "shard0000", shardHost: "localhost:30000" } 0xa480ce0
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), shard: "shard0001", shardHost: "localhost:30001" } 0xa481630
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa481630
m30001| Thu Jun 14 01:30:48 [conn4] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:30:48 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:30:48 [conn4] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:30:48 [conn4] build index test.foo { name: 1.0 }
m30001| Thu Jun 14 01:30:48 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:30:48 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:30:48 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:30:48 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:48 [initandlisten] connection accepted from 127.0.0.1:39410 #7 (7 connections now open)
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
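
The "enabling sharding on: test" and "CMD: shardcollection" lines, followed by the setShardVersion exchange (the first attempt is refused with need_authoritative and retried with authoritative: true, which is expected for a collection the shard has never seen), correspond to two admin commands against the mongos. A sketch using the namespace and shard key reported in the log:

// Sketch: shard test.foo on { name: 1 }, matching the CMD lines above.
var admin = new Mongo( "localhost:30999" ).getDB( "admin" );
printjson( admin.runCommand( { enablesharding : "test" } ) );
printjson( admin.runCommand( { shardcollection : "test.foo" , key : { name : 1 } } ) );
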
m30999| Thu Jun 14 01:30:48 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { name: MinKey } max: { name: MaxKey } dataWritten: 7396502 splitThreshold: 921
m30999| Thu Jun 14 01:30:48 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:30:48 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { name: MinKey } max: { name: MaxKey } dataWritten: 202 splitThreshold: 921
m30999| Thu Jun 14 01:30:48 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:30:48 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { name: MinKey } max: { name: MaxKey }
m30001| Thu Jun 14 01:30:48 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30000| Thu Jun 14 01:30:48 [initandlisten] connection accepted from 127.0.0.1:39411 #8 (8 connections now open)
m30001| Thu Jun 14 01:30:48 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { name: 1.0 }, min: { name: MinKey }, max: { name: MaxKey }, from: "shard0001", splitKeys: [ { name: "allan" } ], shardId: "test.foo-name_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:48 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' acquired, ts : 4fd97708e931319427686eaa
m30001| Thu Jun 14 01:30:48 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651848:656486995 (sleeping for 30000ms)
m30000| Thu Jun 14 01:30:48 [initandlisten] connection accepted from 127.0.0.1:39412 #9 (9 connections now open)
m30001| Thu Jun 14 01:30:48 [conn4] splitChunk accepted at version 1|0||4fd97708e30823f0d8c9edd6
m30001| Thu Jun 14 01:30:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:48-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59299", time: new Date(1339651848376), what: "split", ns: "test.foo", details: { before: { min: { name: MinKey }, max: { name: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { name: MinKey }, max: { name: "allan" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97708e30823f0d8c9edd6') }, right: { min: { name: "allan" }, max: { name: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97708e30823f0d8c9edd6') } } }
m30001| Thu Jun 14 01:30:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' unlocked.
m30999| Thu Jun 14 01:30:48 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd97708e30823f0d8c9edd6 based on: 1|0||4fd97708e30823f0d8c9edd6
m30999| Thu Jun 14 01:30:48 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { name: "allan" } max: { name: MaxKey }
m30001| Thu Jun 14 01:30:48 [conn4] request split points lookup for chunk test.foo { : "allan" } -->> { : MaxKey }
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), shard: "shard0001", shardHost: "localhost:30001" } 0xa481630
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), ok: 1.0 }
m30001| Thu Jun 14 01:30:48 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { name: 1.0 }, min: { name: "allan" }, max: { name: MaxKey }, from: "shard0001", splitKeys: [ { name: "sara" } ], shardId: "test.foo-name_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:48 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' acquired, ts : 4fd97708e931319427686eab
m30001| Thu Jun 14 01:30:48 [conn4] splitChunk accepted at version 1|2||4fd97708e30823f0d8c9edd6
m30001| Thu Jun 14 01:30:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:48-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59299", time: new Date(1339651848384), what: "split", ns: "test.foo", details: { before: { min: { name: "allan" }, max: { name: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { name: "allan" }, max: { name: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97708e30823f0d8c9edd6') }, right: { min: { name: "sara" }, max: { name: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97708e30823f0d8c9edd6') } } }
m30001| Thu Jun 14 01:30:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' unlocked.
m30999| Thu Jun 14 01:30:48 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd97708e30823f0d8c9edd6 based on: 1|2||4fd97708e30823f0d8c9edd6
m30999| Thu Jun 14 01:30:48 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { name: "allan" } max: { name: "sara" }
m30001| Thu Jun 14 01:30:48 [conn4] request split points lookup for chunk test.foo { : "allan" } -->> { : "sara" }
m30001| Thu Jun 14 01:30:48 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { name: 1.0 }, min: { name: "allan" }, max: { name: "sara" }, from: "shard0001", splitKeys: [ { name: "joe" } ], shardId: "test.foo-name_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:48 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' acquired, ts : 4fd97708e931319427686eac
m30001| Thu Jun 14 01:30:48 [conn4] splitChunk accepted at version 1|4||4fd97708e30823f0d8c9edd6
m30001| Thu Jun 14 01:30:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:48-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59299", time: new Date(1339651848389), what: "split", ns: "test.foo", details: { before: { min: { name: "allan" }, max: { name: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { name: "allan" }, max: { name: "joe" }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97708e30823f0d8c9edd6') }, right: { min: { name: "joe" }, max: { name: "sara" }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97708e30823f0d8c9edd6') } } }
m30001| Thu Jun 14 01:30:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' unlocked.
m30999| Thu Jun 14 01:30:48 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd97708e30823f0d8c9edd6 based on: 1|4||4fd97708e30823f0d8c9edd6
ShardingTest test.foo-name_MinKey 1000|1 { "name" : { $minKey : 1 } } -> { "name" : "allan" } shard0001 test.foo
test.foo-name_"allan" 1000|5 { "name" : "allan" } -> { "name" : "joe" } shard0001 test.foo
test.foo-name_"joe" 1000|6 { "name" : "joe" } -> { "name" : "sara" } shard0001 test.foo
test.foo-name_"sara" 1000|4 { "name" : "sara" } -> { "name" : { $maxKey : 1 } } shard0001 test.foo
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), shard: "shard0001", shardHost: "localhost:30001" } 0xa481630
m30999| Thu Jun 14 01:30:48 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), ok: 1.0 }
m30999| Thu Jun 14 01:30:48 [conn] CMD: movechunk: { movechunk: "test.foo", find: { name: "allan" }, to: "localhost:30000" }
m30999| Thu Jun 14 01:30:48 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { name: "allan" } max: { name: "joe" }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:30:48 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { name: "allan" }, max: { name: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo-name_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:30:48 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:30:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' acquired, ts : 4fd97708e931319427686ead
m30001| Thu Jun 14 01:30:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:48-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59299", time: new Date(1339651848394), what: "moveChunk.start", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:30:48 [conn4] moveChunk request accepted at version 1|6||4fd97708e30823f0d8c9edd6
m30001| Thu Jun 14 01:30:48 [conn4] moveChunk number of documents: 3
m30001| Thu Jun 14 01:30:48 [initandlisten] connection accepted from 127.0.0.1:59303 #5 (5 connections now open)
m30000| Thu Jun 14 01:30:48 [FileAllocator] allocating new datafile /data/db/count10/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:30:49 [FileAllocator] done allocating datafile /data/db/count11/test.1, size: 32MB, took 0.8 secs
m30000| Thu Jun 14 01:30:49 [FileAllocator] done allocating datafile /data/db/count10/test.ns, size: 16MB, took 0.876 secs
m30000| Thu Jun 14 01:30:49 [FileAllocator] allocating new datafile /data/db/count10/test.0, filling with zeroes...
m30001| Thu Jun 14 01:30:49 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { name: "allan" }, max: { name: "joe" }, shardKeyPattern: { name: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:30:49 [FileAllocator] done allocating datafile /data/db/count10/test.0, size: 16MB, took 0.305 secs
m30000| Thu Jun 14 01:30:49 [FileAllocator] allocating new datafile /data/db/count10/test.1, filling with zeroes...
m30000| Thu Jun 14 01:30:49 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:30:49 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:49 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:30:49 [migrateThread] build index test.foo { name: 1.0 }
m30000| Thu Jun 14 01:30:49 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:49 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { name: "allan" } -> { name: "joe" }
m30000| Thu Jun 14 01:30:50 [FileAllocator] done allocating datafile /data/db/count10/test.1, size: 32MB, took 0.55 secs
m30001| Thu Jun 14 01:30:50 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { name: "allan" }, max: { name: "joe" }, shardKeyPattern: { name: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 100, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:30:50 [conn4] moveChunk setting version to: 2|0||4fd97708e30823f0d8c9edd6
m30000| Thu Jun 14 01:30:50 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { name: "allan" } -> { name: "joe" }
m30000| Thu Jun 14 01:30:50 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:50-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651850404), what: "moveChunk.to", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, step1 of 5: 1204, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 783 } }
m30000| Thu Jun 14 01:30:50 [initandlisten] connection accepted from 127.0.0.1:51094 #10 (10 connections now open)
m30999| Thu Jun 14 01:30:50 [conn] moveChunk result: { ok: 1.0 }
m30001| Thu Jun 14 01:30:50 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { name: "allan" }, max: { name: "joe" }, shardKeyPattern: { name: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 100, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:30:50 [conn4] moveChunk updating self version to: 2|1||4fd97708e30823f0d8c9edd6 through { name: MinKey } -> { name: "allan" } for collection 'test.foo'
m30001| Thu Jun 14 01:30:50 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:50-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59299", time: new Date(1339651850409), what: "moveChunk.commit", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:30:50 [conn4] doing delete inline
m30001| Thu Jun 14 01:30:50 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:30:50 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651848:656486995' unlocked.
m30001| Thu Jun 14 01:30:50 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:30:50-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:59299", time: new Date(1339651850409), what: "moveChunk.from", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2005, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:30:50 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { name: "allan" }, max: { name: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo-name_"allan"", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:809 w:1329 reslen:37 2016ms
m30999| Thu Jun 14 01:30:50 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 2|1||4fd97708e30823f0d8c9edd6 based on: 1|6||4fd97708e30823f0d8c9edd6
m30999| Thu Jun 14 01:30:50 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), shard: "shard0000", shardHost: "localhost:30000" } 0xa480ce0
m30999| Thu Jun 14 01:30:50 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:30:50 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa480ce0
m30999| Thu Jun 14 01:30:50 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:50 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), serverID: ObjectId('4fd97707e30823f0d8c9edd4'), shard: "shard0001", shardHost: "localhost:30001" } 0xa481630
m30999| Thu Jun 14 01:30:50 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97708e30823f0d8c9edd6'), ok: 1.0 }
m30000| Thu Jun 14 01:30:50 [conn6] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:30:50 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { name: "sara" } max: { name: MaxKey } dataWritten: 7793196 splitThreshold: 11796480
m30999| Thu Jun 14 01:30:50 [conn] chunk not full enough to trigger auto-split no split entry
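
The moveChunk sequence above (moveChunk.start, the cloned/catchup/steady progress reports, moveChunk.commit, then the inline delete of the 3 migrated documents on the donor) is all triggered by the single movechunk admin command visible at the "CMD: movechunk" line. Issued by hand it is roughly:

// Sketch: move the chunk containing { name: "allan" } to shard0000 (localhost:30000).
var admin = new Mongo( "localhost:30999" ).getDB( "admin" );
printjson( admin.runCommand( { movechunk : "test.foo" ,
                               find : { name : "allan" } ,
                               to : "localhost:30000" } ) );   // the log reports: moveChunk result: { ok: 1.0 }
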
m30999| Thu Jun 14 01:30:55 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:30:55 [conn3] end connection 127.0.0.1:39399 (9 connections now open)
m30000| Thu Jun 14 01:30:55 [conn5] end connection 127.0.0.1:39404 (8 connections now open)
m30001| Thu Jun 14 01:30:55 [conn3] end connection 127.0.0.1:59298 (4 connections now open)
m30001| Thu Jun 14 01:30:55 [conn4] end connection 127.0.0.1:59299 (3 connections now open)
m30000| Thu Jun 14 01:30:55 [conn6] end connection 127.0.0.1:39407 (7 connections now open)
Thu Jun 14 01:30:56 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:30:56 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:30:56 [interruptThread] now exiting
m30000| Thu Jun 14 01:30:56 dbexit:
m30000| Thu Jun 14 01:30:56 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:30:56 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:30:56 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:30:56 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:30:56 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:30:56 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:30:56 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:30:56 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:30:56 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:30:56 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:30:56 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:30:56 dbexit: really exiting now
m30001| Thu Jun 14 01:30:56 [conn5] end connection 127.0.0.1:59303 (2 connections now open)
Thu Jun 14 01:30:57 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:30:57 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:30:57 [interruptThread] now exiting
m30001| Thu Jun 14 01:30:57 dbexit:
m30001| Thu Jun 14 01:30:57 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:30:57 [interruptThread] closing listening socket: 16
m30001| Thu Jun 14 01:30:57 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:30:57 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:30:57 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:30:57 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:30:57 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:30:57 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:30:57 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:30:57 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:30:57 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:30:57 dbexit: really exiting now
Thu Jun 14 01:30:58 shell: stopped mongo program on port 30001
*** ShardingTest count1 completed successfully in 12.377 seconds ***
12433.327913ms
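
count1.js does not log its assertions at this verbosity, so the actual checks are not visible above. A hedged sketch of the kind of counts such a test issues through the router once documents sit on both shards; the key values are taken from the split keys in the log, the queries themselves are illustrative only:

// Hedged sketch -- not the test's own assertions.
var db = new Mongo( "localhost:30999" ).getDB( "test" );
print( db.foo.find().count() );                                                // merged total across shard0000 and shard0001
print( db.foo.find( { name : { $gte : "allan" , $lt : "joe" } } ).count() );   // the range that was just migrated
print( db.foo.find( { name : "joe" } ).count() );                              // a single shard-key value
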
Thu Jun 14 01:30:58 [initandlisten] connection accepted from 127.0.0.1:54598 #18 (5 connections now open)
*******************************************
Test : count2.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/count2.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/count2.js";TestData.testFile = "count2.js";TestData.testName = "count2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:30:58 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/count20'
Thu Jun 14 01:30:58 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/count20
m30000| Thu Jun 14 01:30:58
m30000| Thu Jun 14 01:30:58 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:30:58
m30000| Thu Jun 14 01:30:58 [initandlisten] MongoDB starting : pid=23756 port=30000 dbpath=/data/db/count20 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:30:58 [initandlisten]
m30000| Thu Jun 14 01:30:58 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:30:58 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:30:58 [initandlisten]
m30000| Thu Jun 14 01:30:58 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:30:58 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:30:58 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:30:58 [initandlisten]
m30000| Thu Jun 14 01:30:58 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:30:58 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:30:58 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:30:58 [initandlisten] options: { dbpath: "/data/db/count20", port: 30000 }
m30000| Thu Jun 14 01:30:58 [websvr] admin web console waiting for connections on port 31000
m30000| Thu Jun 14 01:30:58 [initandlisten] waiting for connections on port 30000
Resetting db path '/data/db/count21'
m30000| Thu Jun 14 01:30:58 [initandlisten] connection accepted from 127.0.0.1:51097 #1 (1 connection now open)
Thu Jun 14 01:30:58 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/count21
m30001| Thu Jun 14 01:30:58
m30001| Thu Jun 14 01:30:58 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:30:58
m30001| Thu Jun 14 01:30:58 [initandlisten] MongoDB starting : pid=23769 port=30001 dbpath=/data/db/count21 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:30:58 [initandlisten]
m30001| Thu Jun 14 01:30:58 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:30:58 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:30:58 [initandlisten]
m30001| Thu Jun 14 01:30:58 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:30:58 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:30:58 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:30:58 [initandlisten]
m30001| Thu Jun 14 01:30:58 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:30:58 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:30:58 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:30:58 [initandlisten] options: { dbpath: "/data/db/count21", port: 30001 }
m30001| Thu Jun 14 01:30:58 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:30:58 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:30:58 [initandlisten] connection accepted from 127.0.0.1:42506 #1 (1 connection now open)
ShardingTest count2 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
Thu Jun 14 01:30:58 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:30:58 [initandlisten] connection accepted from 127.0.0.1:51100 #2 (2 connections now open)
m30000| Thu Jun 14 01:30:58 [FileAllocator] allocating new datafile /data/db/count20/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:30:58 [FileAllocator] creating directory /data/db/count20/_tmp
m30000| Thu Jun 14 01:30:58 [initandlisten] connection accepted from 127.0.0.1:51102 #3 (3 connections now open)
m30999| Thu Jun 14 01:30:58 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:30:58 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23783 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:30:58 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:30:58 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:30:58 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:30:58 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:30:58 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:58 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:58 [mongosMain] connected connection!
m30000| Thu Jun 14 01:30:59 [FileAllocator] done allocating datafile /data/db/count20/config.ns, size: 16MB, took 0.256 secs
m30000| Thu Jun 14 01:30:59 [FileAllocator] allocating new datafile /data/db/count20/config.0, filling with zeroes...
m30999| Thu Jun 14 01:30:59 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:59 [mongosMain] connected connection!
m30999| Thu Jun 14 01:30:59 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:30:59 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:59 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:30:59 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:30:59 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:30:59 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:30:59 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:30:59 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:30:59
m30999| Thu Jun 14 01:30:59 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:30:59 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:59 [Balancer] connected connection!
m30999| Thu Jun 14 01:30:59 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:30:59 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:30:59 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651859:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651859:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651859:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:30:59 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977130f3641cb70fe65ef" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:30:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651859:1804289383' acquired, ts : 4fd977130f3641cb70fe65ef
m30999| Thu Jun 14 01:30:59 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:30:59 [Balancer] no collections to balance
m30999| Thu Jun 14 01:30:59 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:30:59 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:30:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651859:1804289383' unlocked.
m30999| Thu Jun 14 01:30:59 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651859:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:30:59 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:30:59 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651859:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:30:59 [FileAllocator] done allocating datafile /data/db/count20/config.0, size: 16MB, took 0.323 secs
m30000| Thu Jun 14 01:30:59 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn2] insert config.settings keyUpdates:0 locks(micros) w:595049 594ms
m30000| Thu Jun 14 01:30:59 [FileAllocator] allocating new datafile /data/db/count20/config.1, filling with zeroes...
m30000| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:51105 #4 (4 connections now open)
m30000| Thu Jun 14 01:30:59 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:30:59 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:30:59 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:51106 #5 (5 connections now open)
m30000| Thu Jun 14 01:30:59 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:30:59 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:30:59 [mongosMain] connection accepted from 127.0.0.1:43086 #1 (1 connection now open)
Thu Jun 14 01:30:59 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:51108 #6 (6 connections now open)
m30000| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:51109 #7 (7 connections now open)
m30000| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:51110 #8 (8 connections now open)
m30998| Thu Jun 14 01:30:59 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:30:59 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23803 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:30:59 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:30:59 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:30:59 [mongosMain] options: { configdb: "localhost:30000", port: 30998, verbose: true }
m30998| Thu Jun 14 01:30:59 [mongosMain] config string : localhost:30000
m30998| Thu Jun 14 01:30:59 [mongosMain] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:30:59 [mongosMain] connected connection!
m30998| Thu Jun 14 01:30:59 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:30:59 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:30:59 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:30:59 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:30:59 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:30:59 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:30:59 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:30:59 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:30:59 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:30:59 [Balancer] connected connection!
m30998| Thu Jun 14 01:30:59 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:30:59 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:30:59
m30998| Thu Jun 14 01:30:59 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:30:59 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:30:59 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:30:59 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:30:59 [Balancer] connected connection!
m30998| Thu Jun 14 01:30:59 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:30:59 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651859:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339651859:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339651859:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:30:59 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd9771384b9585d3c5b9f59" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd977130f3641cb70fe65ef" } }
m30998| Thu Jun 14 01:30:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651859:1804289383' acquired, ts : 4fd9771384b9585d3c5b9f59
m30998| Thu Jun 14 01:30:59 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:30:59 [Balancer] no collections to balance
m30998| Thu Jun 14 01:30:59 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:30:59 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:30:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339651859:1804289383' unlocked.
m30998| Thu Jun 14 01:30:59 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30998:1339651859:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:30:59 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:30:59 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30998:1339651859:1804289383', sleeping for 30000ms
m30998| Thu Jun 14 01:30:59 [mongosMain] connection accepted from 127.0.0.1:45623 #1 (1 connection now open)
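
Unlike count1, this test also starts a second mongos (port 30998, just above) against the same config server; a common reason is to verify that counts agree even when one router has stale chunk information. A hedged illustration of querying both routers (not count2.js's actual code; the ports are the ones in the log):

// Hedged sketch: the same count issued through both mongos processes seen above.
var db1 = new Mongo( "localhost:30999" ).getDB( "test" );
var db2 = new Mongo( "localhost:30998" ).getDB( "test" );
print( "via 30999: " + db1.foo.count() + "  via 30998: " + db2.foo.count() );
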
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:30:59 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:30:59 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:30:59 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:30:59 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:30:59 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:59 [conn] connected connection!
m30999| Thu Jun 14 01:30:59 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:30:59 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:51113 #9 (9 connections now open)
m30999| Thu Jun 14 01:30:59 [conn] connected connection!
m30999| Thu Jun 14 01:30:59 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977130f3641cb70fe65ee
m30999| Thu Jun 14 01:30:59 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:30:59 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:59 [conn] connected connection!
m30999| Thu Jun 14 01:30:59 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977130f3641cb70fe65ee
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:30:59 [conn] initializing shard connection to localhost:30001
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Thu Jun 14 01:30:59 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:30:59 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:30:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:30:59 [conn] connected connection!
m30999| Thu Jun 14 01:30:59 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:30:59 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:30:59 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:30:59 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { name: 1.0 } }
m30999| Thu Jun 14 01:30:59 [conn] enable sharding on: test.foo with shard key: { name: 1.0 }
m30999| Thu Jun 14 01:30:59 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd977130f3641cb70fe65f0
m30999| Thu Jun 14 01:30:59 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd977130f3641cb70fe65f0 based on: (empty)
m30000| Thu Jun 14 01:30:59 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:30:59 [conn3] build index done. scanned 0 total records. 0 secs
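The setup logged above (put [test] on shard0001, enable sharding, shardcollection on test.foo with key { name: 1 }) corresponds to the usual two admin commands, roughly:

    var admin = mongos.getDB("admin");   // `mongos` = router connection (assumed name)
    admin.runCommand({ enableSharding: "test" });
    admin.runCommand({ shardCollection: "test.foo", key: { name: 1 } });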
m30999| Thu Jun 14 01:30:59 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:30:59 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), shard: "shard0000", shardHost: "localhost:30000" } 0xa5c6720
m30999| Thu Jun 14 01:30:59 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:30:59 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), shard: "shard0001", shardHost: "localhost:30001" } 0xa5c84b8
m30001| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:42519 #2 (2 connections now open)
m30001| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:42521 #3 (3 connections now open)
m30001| Thu Jun 14 01:30:59 [initandlisten] connection accepted from 127.0.0.1:42522 #4 (4 connections now open)
m30001| Thu Jun 14 01:30:59 [FileAllocator] allocating new datafile /data/db/count21/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:30:59 [FileAllocator] creating directory /data/db/count21/_tmp
m30000| Thu Jun 14 01:31:00 [FileAllocator] done allocating datafile /data/db/count20/config.1, size: 32MB, took 0.584 secs
m30001| Thu Jun 14 01:31:00 [FileAllocator] done allocating datafile /data/db/count21/test.ns, size: 16MB, took 0.375 secs
m30001| Thu Jun 14 01:31:00 [FileAllocator] allocating new datafile /data/db/count21/test.0, filling with zeroes...
m30001| Thu Jun 14 01:31:00 [FileAllocator] done allocating datafile /data/db/count21/test.0, size: 16MB, took 0.289 secs
m30001| Thu Jun 14 01:31:00 [conn4] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:31:00 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:31:00 [conn4] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:31:00 [conn4] build index test.foo { name: 1.0 }
m30001| Thu Jun 14 01:31:00 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:31:00 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) W:80 r:249 w:1172774 1172ms
m30001| Thu Jun 14 01:31:00 [conn3] command admin.$cmd command: { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 reslen:173 1170ms
m30001| Thu Jun 14 01:31:00 [FileAllocator] allocating new datafile /data/db/count21/test.1, filling with zeroes...
m30001| Thu Jun 14 01:31:00 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:31:00 [initandlisten] connection accepted from 127.0.0.1:51116 #10 (10 connections now open)
m30999| Thu Jun 14 01:31:00 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:31:00 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0xa5c84b8
m30999| Thu Jun 14 01:31:00 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:31:00 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { name: MinKey } max: { name: MaxKey } dataWritten: 7396469 splitThreshold: 921
m30999| Thu Jun 14 01:31:00 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:31:00 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { name: MinKey } max: { name: MaxKey }
m30000| Thu Jun 14 01:31:00 [initandlisten] connection accepted from 127.0.0.1:51117 #11 (11 connections now open)
m30001| Thu Jun 14 01:31:00 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { name: 1.0 }, min: { name: MinKey }, max: { name: MaxKey }, from: "shard0001", splitKeys: [ { name: "ddd" } ], shardId: "test.foo-name_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:31:00 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:31:00 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651860:467186830' acquired, ts : 4fd977146bd1ea475ff94b5c
m30001| Thu Jun 14 01:31:00 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651860:467186830 (sleeping for 30000ms)
m30001| Thu Jun 14 01:31:00 [conn4] splitChunk accepted at version 1|0||4fd977130f3641cb70fe65f0
m30001| Thu Jun 14 01:31:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:31:00-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42522", time: new Date(1339651860869), what: "split", ns: "test.foo", details: { before: { min: { name: MinKey }, max: { name: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { name: MinKey }, max: { name: "ddd" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977130f3641cb70fe65f0') }, right: { min: { name: "ddd" }, max: { name: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977130f3641cb70fe65f0') } } }
m30001| Thu Jun 14 01:31:00 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651860:467186830' unlocked.
m30000| Thu Jun 14 01:31:00 [initandlisten] connection accepted from 127.0.0.1:51118 #12 (12 connections now open)
m30999| Thu Jun 14 01:31:00 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd977130f3641cb70fe65f0 based on: 1|0||4fd977130f3641cb70fe65f0
m30999| Thu Jun 14 01:31:00 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), shard: "shard0001", shardHost: "localhost:30001" } 0xa5c84b8
m30999| Thu Jun 14 01:31:00 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), ok: 1.0 }
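The splitChunk request above carries splitKeys [ { name: "ddd" } ]: the single chunk covering (MinKey, MaxKey) is cut at name "ddd" into the 1|1 and 1|2 chunks the config server now stores. Issued from the shell, a split at that key would look something like the following (a sketch; the test's exact call is not shown in the log):

    // `mongos` = connection to the router (assumed name, as in the earlier sketches)
    mongos.getDB("admin").runCommand({ split: "test.foo", middle: { name: "ddd" } });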
m30000| Thu Jun 14 01:31:00 [initandlisten] connection accepted from 127.0.0.1:51119 #13 (13 connections now open)
m30001| Thu Jun 14 01:31:00 [initandlisten] connection accepted from 127.0.0.1:42527 #5 (5 connections now open)
m30001| Thu Jun 14 01:31:00 [initandlisten] connection accepted from 127.0.0.1:42528 #6 (6 connections now open)
m30998| Thu Jun 14 01:31:00 [conn] DBConfig unserialize: test { _id: "test", partitioned: true, primary: "shard0001" }
m30998| Thu Jun 14 01:31:00 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|2||4fd977130f3641cb70fe65f0 based on: (empty)
m30998| Thu Jun 14 01:31:00 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:31:00 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:31:00 [conn] connected connection!
m30998| Thu Jun 14 01:31:00 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd9771384b9585d3c5b9f58
m30998| Thu Jun 14 01:31:00 [conn] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:31:00 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30998| Thu Jun 14 01:31:00 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd9771384b9585d3c5b9f58'), shard: "shard0000", shardHost: "localhost:30000" } 0x8749c50
m30998| Thu Jun 14 01:31:00 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:31:00 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:31:00 BackgroundJob starting: WriteBackListener-localhost:30000
m30998| Thu Jun 14 01:31:00 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:31:00 [conn] connected connection!
m30998| Thu Jun 14 01:31:00 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd9771384b9585d3c5b9f58
m30998| Thu Jun 14 01:31:00 BackgroundJob starting: WriteBackListener-localhost:30001
m30998| Thu Jun 14 01:31:00 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:31:00 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:31:00 [conn] initializing shard connection to localhost:30001
m30998| Thu Jun 14 01:31:00 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd9771384b9585d3c5b9f58'), shard: "shard0001", shardHost: "localhost:30001" } 0x874a188
m30998| Thu Jun 14 01:31:00 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:31:00 [WriteBackListener-localhost:30001] connected connection!
ShardingTest test.foo-name_MinKey 1000|1 { "name" : { $minKey : 1 } } -> { "name" : "ddd" } shard0001 test.foo
test.foo-name_"ddd" 1000|2 { "name" : "ddd" } -> { "name" : { $maxKey : 1 } } shard0001 test.foo
m30999| Thu Jun 14 01:31:00 [conn] CMD: movechunk: { movechunk: "test.foo", find: { name: "aaa" }, to: "localhost:30000" }
m30999| Thu Jun 14 01:31:00 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { name: MinKey } max: { name: "ddd" }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:31:00 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { name: MinKey }, max: { name: "ddd" }, maxChunkSizeBytes: 52428800, shardId: "test.foo-name_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:31:00 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:31:00 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651860:467186830' acquired, ts : 4fd977146bd1ea475ff94b5d
m30001| Thu Jun 14 01:31:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:31:00-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42522", time: new Date(1339651860880), what: "moveChunk.start", ns: "test.foo", details: { min: { name: MinKey }, max: { name: "ddd" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:31:00 [conn4] moveChunk request accepted at version 1|2||4fd977130f3641cb70fe65f0
m30001| Thu Jun 14 01:31:00 [conn4] moveChunk number of documents: 3
m30001| Thu Jun 14 01:31:00 [initandlisten] connection accepted from 127.0.0.1:42529 #7 (7 connections now open)
m30000| Thu Jun 14 01:31:00 [FileAllocator] allocating new datafile /data/db/count20/test.ns, filling with zeroes...
m30000| Thu Jun 14 01:31:01 [FileAllocator] done allocating datafile /data/db/count20/test.ns, size: 16MB, took 0.638 secs
m30000| Thu Jun 14 01:31:01 [FileAllocator] allocating new datafile /data/db/count20/test.0, filling with zeroes...
m30001| Thu Jun 14 01:31:01 [FileAllocator] done allocating datafile /data/db/count21/test.1, size: 32MB, took 0.89 secs
m30001| Thu Jun 14 01:31:01 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { name: MinKey }, max: { name: "ddd" }, shardKeyPattern: { name: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:31:02 [FileAllocator] done allocating datafile /data/db/count20/test.0, size: 16MB, took 0.461 secs
m30000| Thu Jun 14 01:31:02 [FileAllocator] allocating new datafile /data/db/count20/test.1, filling with zeroes...
m30000| Thu Jun 14 01:31:02 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:31:02 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:31:02 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:31:02 [migrateThread] build index test.foo { name: 1.0 }
m30000| Thu Jun 14 01:31:02 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:31:02 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { name: MinKey } -> { name: "ddd" }
m30000| Thu Jun 14 01:31:02 [FileAllocator] done allocating datafile /data/db/count20/test.1, size: 32MB, took 0.825 secs
m30001| Thu Jun 14 01:31:02 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { name: MinKey }, max: { name: "ddd" }, shardKeyPattern: { name: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 108, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:31:02 [conn4] moveChunk setting version to: 2|0||4fd977130f3641cb70fe65f0
m30000| Thu Jun 14 01:31:02 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { name: MinKey } -> { name: "ddd" }
m30000| Thu Jun 14 01:31:02 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:31:02-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651862897), what: "moveChunk.to", ns: "test.foo", details: { min: { name: MinKey }, max: { name: "ddd" }, step1 of 5: 1122, step2 of 5: 0, step3 of 5: 1, step4 of 5: 0, step5 of 5: 891 } }
m30000| Thu Jun 14 01:31:02 [initandlisten] connection accepted from 127.0.0.1:51123 #14 (14 connections now open)
m30001| Thu Jun 14 01:31:02 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { name: MinKey }, max: { name: "ddd" }, shardKeyPattern: { name: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 108, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:31:02 [conn4] moveChunk updating self version to: 2|1||4fd977130f3641cb70fe65f0 through { name: "ddd" } -> { name: MaxKey } for collection 'test.foo'
m30001| Thu Jun 14 01:31:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:31:02-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42522", time: new Date(1339651862901), what: "moveChunk.commit", ns: "test.foo", details: { min: { name: MinKey }, max: { name: "ddd" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:31:02 [conn4] doing delete inline
m30001| Thu Jun 14 01:31:02 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:31:02 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651860:467186830' unlocked.
m30001| Thu Jun 14 01:31:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:31:02-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42522", time: new Date(1339651862902), what: "moveChunk.from", ns: "test.foo", details: { min: { name: MinKey }, max: { name: "ddd" }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2004, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:31:02 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { name: MinKey }, max: { name: "ddd" }, maxChunkSizeBytes: 52428800, shardId: "test.foo-name_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:80 r:361 w:1173091 reslen:37 2023ms
m30999| Thu Jun 14 01:31:02 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:31:02 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 2|1||4fd977130f3641cb70fe65f0 based on: 1|2||4fd977130f3641cb70fe65f0
m30999| Thu Jun 14 01:31:02 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), shard: "shard0000", shardHost: "localhost:30000" } 0xa5c6720
m30999| Thu Jun 14 01:31:02 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:31:02 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa5c6720
m30000| Thu Jun 14 01:31:02 [conn9] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:31:02 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:31:02 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd977130f3641cb70fe65ee'), shard: "shard0001", shardHost: "localhost:30001" } 0xa5c84b8
m30999| Thu Jun 14 01:31:02 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), ok: 1.0 }
before sleep: Thu Jun 14 2012 01:31:02 GMT-0400 (EDT)
after sleep: Thu Jun 14 2012 01:31:04 GMT-0400 (EDT)
ShardingTest test.foo-name_MinKey 2000|0 { "name" : { $minKey : 1 } } -> { "name" : "ddd" } shard0000 test.foo
test.foo-name_"ddd" 2000|1 { "name" : "ddd" } -> { "name" : { $maxKey : 1 } } shard0001 test.foo
m30001| Thu Jun 14 01:31:04 [conn5] assertion 13388 [test.foo] shard version not ok in Client::Context: version mismatch detected for test.foo, stored major version 2 does not match received 1 ( ns : test.foo, received : 1|2||4fd977130f3641cb70fe65f0, wanted : 2|0||4fd977130f3641cb70fe65f0, send ) ( ns : test.foo, received : 1|2||4fd977130f3641cb70fe65f0, wanted : 2|0||4fd977130f3641cb70fe65f0, send ) ns:test.$cmd query:{ count: "foo", query: { name: { $gte: "aaa", $lt: "ddd" } } }
m30001| Thu Jun 14 01:31:04 [conn5] ntoskip:0 ntoreturn:1
m30001| Thu Jun 14 01:31:04 [conn5] { $err: "[test.foo] shard version not ok in Client::Context: version mismatch detected for test.foo, stored major version 2 does not match received 1 ( ns : te...", code: 13388, ns: "test.foo", vReceived: Timestamp 1000|2, vReceivedEpoch: ObjectId('4fd977130f3641cb70fe65f0'), vWanted: Timestamp 2000|0, vWantedEpoch: ObjectId('4fd977130f3641cb70fe65f0') }
m30998| Thu Jun 14 01:31:04 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|1||4fd977130f3641cb70fe65f0 based on: 1|2||4fd977130f3641cb70fe65f0
m30998| Thu Jun 14 01:31:04 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd9771384b9585d3c5b9f58'), shard: "shard0000", shardHost: "localhost:30000" } 0x8749c50
m30998| Thu Jun 14 01:31:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:31:04 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), serverID: ObjectId('4fd9771384b9585d3c5b9f58'), shard: "shard0001", shardHost: "localhost:30001" } 0x874a188
m30998| Thu Jun 14 01:31:04 [conn] setShardVersion success: { oldVersion: Timestamp 1000|2, oldVersionEpoch: ObjectId('4fd977130f3641cb70fe65f0'), ok: 1.0 }
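The assertion 13388 block above is the interesting part of this test: a range count routed through the second mongos (m30998), which still holds chunk version 1|2, is rejected by shard0001; that mongos then reloads its ChunkManager to 2|1, re-sends setShardVersion to both shards, and can retry the operation. The triggering operation is simply a count over the moved range (a sketch; `mongos2` is an assumed name for a connection to the router on port 30998):

    var n = mongos2.getDB("test").foo.count({ name: { $gte: "aaa", $lt: "ddd" } });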
m30999| Thu Jun 14 01:31:04 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:31:04 [conn3] end connection 127.0.0.1:51102 (13 connections now open)
m30000| Thu Jun 14 01:31:04 [conn5] end connection 127.0.0.1:51106 (12 connections now open)
m30000| Thu Jun 14 01:31:04 [conn9] end connection 127.0.0.1:51113 (11 connections now open)
m30001| Thu Jun 14 01:31:04 [conn4] end connection 127.0.0.1:42522 (6 connections now open)
m30001| Thu Jun 14 01:31:04 [conn3] end connection 127.0.0.1:42521 (5 connections now open)
Thu Jun 14 01:31:05 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:31:05 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:31:05 [conn8] end connection 127.0.0.1:51110 (10 connections now open)
m30000| Thu Jun 14 01:31:05 [conn6] end connection 127.0.0.1:51108 (10 connections now open)
m30000| Thu Jun 14 01:31:05 [conn13] end connection 127.0.0.1:51119 (9 connections now open)
m30001| Thu Jun 14 01:31:05 [conn5] end connection 127.0.0.1:42527 (4 connections now open)
Thu Jun 14 01:31:06 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:31:06 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:31:06 [interruptThread] now exiting
m30000| Thu Jun 14 01:31:06 dbexit:
m30000| Thu Jun 14 01:31:06 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:31:06 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:31:06 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:31:06 [interruptThread] closing listening socket: 16
m30000| Thu Jun 14 01:31:06 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:31:06 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:31:06 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:31:06 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:31:06 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:31:06 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:31:06 [conn7] end connection 127.0.0.1:42529 (3 connections now open)
m30000| Thu Jun 14 01:31:06 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:31:06 dbexit: really exiting now
Thu Jun 14 01:31:07 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:31:07 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:31:07 [interruptThread] now exiting
m30001| Thu Jun 14 01:31:07 dbexit:
m30001| Thu Jun 14 01:31:07 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:31:07 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:31:07 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:31:07 [interruptThread] closing listening socket: 19
m30001| Thu Jun 14 01:31:07 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:31:07 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:31:07 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:31:07 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:31:07 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:31:07 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:31:07 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:31:07 dbexit: really exiting now
Thu Jun 14 01:31:08 shell: stopped mongo program on port 30001
*** ShardingTest count2 completed successfully in 10.431 seconds ***
10483.371973ms
Thu Jun 14 01:31:08 [initandlisten] connection accepted from 127.0.0.1:54627 #19 (6 connections now open)
*******************************************
Test : count_slaveok.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/count_slaveok.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/count_slaveok.js";TestData.testFile = "count_slaveok.js";TestData.testName = "count_slaveok";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:31:08 2012
MongoDB shell version: 2.1.2-pre-
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31100,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "countSlaveOk-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "countSlaveOk",
        "shard" : 0,
        "node" : 0,
        "set" : "countSlaveOk-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/countSlaveOk-rs0-0'
Thu Jun 14 01:31:09 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet countSlaveOk-rs0 --dbpath /data/db/countSlaveOk-rs0-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:31:09
m31100| Thu Jun 14 01:31:09 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:31:09
m31100| Thu Jun 14 01:31:09 [initandlisten] MongoDB starting : pid=23853 port=31100 dbpath=/data/db/countSlaveOk-rs0-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:31:09 [initandlisten]
m31100| Thu Jun 14 01:31:09 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:31:09 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:31:09 [initandlisten]
m31100| Thu Jun 14 01:31:09 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:31:09 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:31:09 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:31:09 [initandlisten]
m31100| Thu Jun 14 01:31:09 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:31:09 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:31:09 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:31:09 [initandlisten] options: { dbpath: "/data/db/countSlaveOk-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "countSlaveOk-rs0", rest: true, smallfiles: true }
m31100| Thu Jun 14 01:31:09 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:31:09 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:31:09 [initandlisten] connection accepted from 10.255.119.66:43211 #1 (1 connection now open)
m31100| Thu Jun 14 01:31:09 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:31:09 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Thu Jun 14 01:31:09 [initandlisten] connection accepted from 127.0.0.1:38987 #2 (2 connections now open)
[ connection to domU-12-31-39-01-70-B4:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31101,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "countSlaveOk-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "countSlaveOk",
        "shard" : 0,
        "node" : 1,
        "set" : "countSlaveOk-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/countSlaveOk-rs0-1'
Thu Jun 14 01:31:09 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet countSlaveOk-rs0 --dbpath /data/db/countSlaveOk-rs0-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:31:09
m31101| Thu Jun 14 01:31:09 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:31:09
m31101| Thu Jun 14 01:31:09 [initandlisten] MongoDB starting : pid=23869 port=31101 dbpath=/data/db/countSlaveOk-rs0-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:31:09 [initandlisten]
m31101| Thu Jun 14 01:31:09 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:31:09 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:31:09 [initandlisten]
m31101| Thu Jun 14 01:31:09 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:31:09 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:31:09 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:31:09 [initandlisten]
m31101| Thu Jun 14 01:31:09 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:31:09 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:31:09 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:31:09 [initandlisten] options: { dbpath: "/data/db/countSlaveOk-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "countSlaveOk-rs0", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:31:09 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:31:09 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:31:09 [initandlisten] connection accepted from 10.255.119.66:37589 #1 (1 connection now open)
m31101| Thu Jun 14 01:31:09 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:31:09 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
    connection to domU-12-31-39-01-70-B4:31100,
    connection to domU-12-31-39-01-70-B4:31101
]
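Everything from "ReplSetTest Starting Set" down to the two-connection array above is produced by the jstests ReplSetTest helper starting the two countSlaveOk-rs0 nodes. Driving that helper directly looks roughly like this (a sketch; only the name/nodes/oplogSize values are taken from the output above, the remaining per-node flags such as noprealloc and smallfiles are filled in by the harness):

    var rt = new ReplSetTest({ name: "countSlaveOk-rs0", nodes: 2, oplogSize: 40 });
    var conns = rt.startSet();   // starts the two mongod processes shown above
    rt.initiate();               // sends the replSetInitiate document printed below
    rt.awaitSecondaryNodes();    // wait until the non-primary member reaches SECONDARY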
{
    "replSetInitiate" : {
        "_id" : "countSlaveOk-rs0",
        "members" : [
            {
                "_id" : 0,
                "host" : "domU-12-31-39-01-70-B4:31100"
            },
            {
                "_id" : 1,
                "host" : "domU-12-31-39-01-70-B4:31101"
            }
        ]
    }
}
m31101| Thu Jun 14 01:31:09 [initandlisten] connection accepted from 127.0.0.1:39336 #2 (2 connections now open)
m31100| Thu Jun 14 01:31:09 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:31:09 [conn2] replSet replSetInitiate config object parses ok, 2 members specified
m31101| Thu Jun 14 01:31:09 [initandlisten] connection accepted from 10.255.119.66:37591 #3 (3 connections now open)
m31100| Thu Jun 14 01:31:09 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:31:09 [conn2] ******
m31100| Thu Jun 14 01:31:09 [conn2] creating replication oplog of size: 40MB...
m31100| Thu Jun 14 01:31:09 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:31:09 [FileAllocator] creating directory /data/db/countSlaveOk-rs0-0/_tmp
m31100| Thu Jun 14 01:31:09 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-0/local.ns, size: 16MB, took 0.251 secs
m31100| Thu Jun 14 01:31:09 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:31:11 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-0/local.0, size: 64MB, took 1.276 secs
m31100| Thu Jun 14 01:31:11 [conn2] ******
m31100| Thu Jun 14 01:31:11 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Thu Jun 14 01:31:11 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:31:11 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:31:11 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "countSlaveOk-rs0", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1567622 w:39 reslen:112 1567ms
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
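The initiation itself is the plain replSetInitiate admin command shown in the config dump above; doing it by hand from a shell connected to port 31100 would look like:

    var cfg = {
        _id: "countSlaveOk-rs0",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31101" }
        ]
    };
    db.getSiblingDB("admin").runCommand({ replSetInitiate: cfg });   // or rs.initiate(cfg)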
m31100| Thu Jun 14 01:31:19 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:31:19 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:31:19 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31100| Thu Jun 14 01:31:19 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:31:19 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:31:19 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:31:19 [initandlisten] connection accepted from 10.255.119.66:43217 #3 (3 connections now open)
m31101| Thu Jun 14 01:31:19 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:31:19 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:31:19 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:31:19 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:31:19 [FileAllocator] creating directory /data/db/countSlaveOk-rs0-1/_tmp
m31101| Thu Jun 14 01:31:19 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-1/local.ns, size: 16MB, took 0.242 secs
m31101| Thu Jun 14 01:31:19 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-1/local.0, filling with zeroes...
m31101| Thu Jun 14 01:31:19 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-1/local.0, size: 16MB, took 0.265 secs
m31101| Thu Jun 14 01:31:19 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:31:19 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:31:19 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31101| Thu Jun 14 01:31:19 [rsSync] ******
m31101| Thu Jun 14 01:31:19 [rsSync] creating replication oplog of size: 40MB...
m31101| Thu Jun 14 01:31:19 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-1/local.1, filling with zeroes...
m31101| Thu Jun 14 01:31:21 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-1/local.1, size: 64MB, took 1.208 secs
m31101| Thu Jun 14 01:31:21 [rsSync] ******
m31101| Thu Jun 14 01:31:21 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:31:21 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Thu Jun 14 01:31:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:31:21 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31101 would veto
m31101| Thu Jun 14 01:31:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:31:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31100| Thu Jun 14 01:31:27 [rsMgr] replSet info electSelf 0
m31101| Thu Jun 14 01:31:27 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:31:27 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:31:27 [rsMgr] replSet PRIMARY
m31101| Thu Jun 14 01:31:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31100| Thu Jun 14 01:31:29 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-0/admin.ns, filling with zeroes...
m31100| Thu Jun 14 01:31:29 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31100| Thu Jun 14 01:31:29 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-0/admin.ns, size: 16MB, took 0.247 secs
m31100| Thu Jun 14 01:31:29 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-0/admin.0, filling with zeroes...
m31100| Thu Jun 14 01:31:29 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-0/admin.0, size: 16MB, took 0.295 secs
m31100| Thu Jun 14 01:31:29 [conn2] build index admin.foo { _id: 1 }
m31100| Thu Jun 14 01:31:29 [conn2] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:31:29 [conn2] insert admin.foo keyUpdates:0 locks(micros) W:1567622 w:552889 552ms
ReplSetTest Timestamp(1339651889000, 1)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:31:37 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:31:37 [initandlisten] connection accepted from 10.255.119.66:43218 #4 (4 connections now open)
m31101| Thu Jun 14 01:31:37 [rsSync] build index local.me { _id: 1 }
m31101| Thu Jun 14 01:31:37 [rsSync] build index done. scanned 0 total records. 0.05 secs
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync drop all databases
m31101| Thu Jun 14 01:31:37 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync clone all databases
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:31:37 [initandlisten] connection accepted from 10.255.119.66:43219 #5 (5 connections now open)
m31101| Thu Jun 14 01:31:37 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-1/admin.ns, filling with zeroes...
m31101| Thu Jun 14 01:31:37 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-1/admin.ns, size: 16MB, took 0.259 secs
m31101| Thu Jun 14 01:31:37 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-1/admin.0, filling with zeroes...
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
m31101| Thu Jun 14 01:31:37 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-1/admin.0, size: 16MB, took 0.454 secs
m31101| Thu Jun 14 01:31:37 [rsSync] build index admin.foo { _id: 1 }
m31101| Thu Jun 14 01:31:37 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Thu Jun 14 01:31:37 [rsSync] build index done. scanned 1 total records. 0 secs
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync data copy, starting syncup
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync building indexes
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Thu Jun 14 01:31:37 [conn5] end connection 10.255.119.66:43219 (4 connections now open)
m31100| Thu Jun 14 01:31:37 [initandlisten] connection accepted from 10.255.119.66:43220 #6 (5 connections now open)
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync query minValid
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync finishing up
m31100| Thu Jun 14 01:31:37 [conn6] end connection 10.255.119.66:43220 (4 connections now open)
m31101| Thu Jun 14 01:31:37 [rsSync] replSet set minValid=4fd97731:1
m31101| Thu Jun 14 01:31:37 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Thu Jun 14 01:31:37 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:31:37 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:31:37 [conn4] end connection 10.255.119.66:43218 (3 connections now open)
m31101| Thu Jun 14 01:31:38 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:31:38 [initandlisten] connection accepted from 10.255.119.66:43221 #7 (4 connections now open)
m31101| Thu Jun 14 01:31:38 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:31:38 [initandlisten] connection accepted from 10.255.119.66:43222 #8 (5 connections now open)
m31101| Thu Jun 14 01:31:38 [rsSync] replSet SECONDARY
m31100| Thu Jun 14 01:31:39 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
{
    "ts" : Timestamp(1339651889000, 1),
    "h" : NumberLong("5580217503205818385"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd977311c4e40fcfe85bbee"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339651889000:1 and latest is 1339651889000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 1
ReplSetTest await synced=true
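The repeated "waiting ... to have an oplog built" lines and the final "await TS ... is 1339651889000:1" are ReplSetTest polling the secondary's oplog until its newest entry matches the primary's. The check amounts to reading the tail of local.oplog.rs (a sketch; `secondary` is an assumed connection name):

    var last = secondary.getDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1).next();
    printjson(last.ts);   // Timestamp(1339651889000, 1) once the admin.foo insert above has replicated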
Thu Jun 14 01:31:39 starting new replica set monitor for replica set countSlaveOk-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:31:39 successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set countSlaveOk-rs0
Thu Jun 14 01:31:39 changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31101" } from countSlaveOk-rs0/
Thu Jun 14 01:31:39 trying to add new host domU-12-31-39-01-70-B4:31100 to replica set countSlaveOk-rs0
m31100| Thu Jun 14 01:31:39 [initandlisten] connection accepted from 10.255.119.66:43223 #9 (6 connections now open)
Thu Jun 14 01:31:39 successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set countSlaveOk-rs0
Thu Jun 14 01:31:39 trying to add new host domU-12-31-39-01-70-B4:31101 to replica set countSlaveOk-rs0
Thu Jun 14 01:31:39 successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set countSlaveOk-rs0
m31100| Thu Jun 14 01:31:39 [initandlisten] connection accepted from 10.255.119.66:43224 #10 (7 connections now open)
m31101| Thu Jun 14 01:31:39 [initandlisten] connection accepted from 10.255.119.66:37600 #4 (4 connections now open)
m31100| Thu Jun 14 01:31:39 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Thu Jun 14 01:31:39 [slaveTracking] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:31:39 [initandlisten] connection accepted from 10.255.119.66:43226 #11 (8 connections now open)
m31100| Thu Jun 14 01:31:39 [conn9] end connection 10.255.119.66:43223 (7 connections now open)
Thu Jun 14 01:31:39 Primary for replica set countSlaveOk-rs0 changed to domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:31:39 replica set monitor for replica set countSlaveOk-rs0 started, address is countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:31:39 [ReplicaSetMonitorWatcher] starting
m31100| Thu Jun 14 01:31:39 [initandlisten] connection accepted from 10.255.119.66:43228 #12 (8 connections now open)
Resetting db path '/data/db/countSlaveOk-config0'
m31101| Thu Jun 14 01:31:39 [initandlisten] connection accepted from 10.255.119.66:37602 #5 (5 connections now open)
Thu Jun 14 01:31:39 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/countSlaveOk-config0
m29000| Thu Jun 14 01:31:39
m29000| Thu Jun 14 01:31:39 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:31:39
m29000| Thu Jun 14 01:31:39 [initandlisten] MongoDB starting : pid=23928 port=29000 dbpath=/data/db/countSlaveOk-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:31:39 [initandlisten]
m29000| Thu Jun 14 01:31:39 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:31:39 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:31:39 [initandlisten]
m29000| Thu Jun 14 01:31:39 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:31:39 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:31:39 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:31:39 [initandlisten]
m29000| Thu Jun 14 01:31:39 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:31:39 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:31:39 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:31:39 [initandlisten] options: { dbpath: "/data/db/countSlaveOk-config0", port: 29000 }
m29000| Thu Jun 14 01:31:39 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:31:39 [websvr] admin web console waiting for connections on port 30000
"domU-12-31-39-01-70-B4:29000"
m29000| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 127.0.0.1:37922 #1 (1 connection now open)
ShardingTest countSlaveOk :
{
    "config" : "domU-12-31-39-01-70-B4:29000",
    "shards" : [
        connection to countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
    ]
}
Thu Jun 14 01:31:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:29000
m29000| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:41892 #2 (2 connections now open)
m29000| Thu Jun 14 01:31:40 [FileAllocator] allocating new datafile /data/db/countSlaveOk-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:31:40 [FileAllocator] creating directory /data/db/countSlaveOk-config0/_tmp
m30999| Thu Jun 14 01:31:40 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:31:40 [mongosMain] MongoS version 2.1.2-pre- starting: pid=23942 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:31:40 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:31:40 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:31:40 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", port: 30999 }
m29000| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:41894 #3 (3 connections now open)
m29000| Thu Jun 14 01:31:40 [FileAllocator] done allocating datafile /data/db/countSlaveOk-config0/config.ns, size: 16MB, took 0.261 secs
m29000| Thu Jun 14 01:31:40 [FileAllocator] allocating new datafile /data/db/countSlaveOk-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:31:40 [FileAllocator] done allocating datafile /data/db/countSlaveOk-config0/config.0, size: 16MB, took 0.31 secs
m30999| Thu Jun 14 01:31:40 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:31:40 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:31:40 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:31:40 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:31:40 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:31:40
m30999| Thu Jun 14 01:31:40 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:31:40 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30999:1339651900:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:31:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651900:1804289383' acquired, ts : 4fd9773ccc3801d195b16964
m30999| Thu Jun 14 01:31:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651900:1804289383' unlocked.
m29000| Thu Jun 14 01:31:40 [FileAllocator] allocating new datafile /data/db/countSlaveOk-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:31:40 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn2] insert config.settings keyUpdates:0 locks(micros) w:586767 586ms
m29000| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:41897 #4 (4 connections now open)
m29000| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:41898 #5 (5 connections now open)
m29000| Thu Jun 14 01:31:40 [conn5] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn4] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn4] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:31:40 [conn4] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn4] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn4] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:31:40 [conn4] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn5] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:41899 #6 (6 connections now open)
m29000| Thu Jun 14 01:31:40 [conn4] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn4] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 1 total records. 0 secs
m29000| Thu Jun 14 01:31:40 [conn6] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:31:40 [mongosMain] connection accepted from 127.0.0.1:43133 #1 (1 connection now open)
ShardingTest undefined going to add shard : countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:40 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:31:40 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:31:40 [conn] starting new replica set monitor for replica set countSlaveOk-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m29000| Thu Jun 14 01:31:40 [conn4] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:31:40 [conn4] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:43240 #13 (9 connections now open)
m30999| Thu Jun 14 01:31:40 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:40 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31101" } from countSlaveOk-rs0/
m30999| Thu Jun 14 01:31:40 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set countSlaveOk-rs0
m31100| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:43241 #14 (10 connections now open)
m30999| Thu Jun 14 01:31:40 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:40 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:40 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set countSlaveOk-rs0
m31101| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:37617 #6 (6 connections now open)
m31100| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:43243 #15 (11 connections now open)
m31100| Thu Jun 14 01:31:40 [conn13] end connection 10.255.119.66:43240 (10 connections now open)
m30999| Thu Jun 14 01:31:40 [conn] Primary for replica set countSlaveOk-rs0 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:37619 #7 (7 connections now open)
m30999| Thu Jun 14 01:31:40 [conn] replica set monitor for replica set countSlaveOk-rs0 started, address is countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:40 [ReplicaSetMonitorWatcher] starting
m30999| Thu Jun 14 01:31:40 [conn] going to add shard: { _id: "countSlaveOk-rs0", host: "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101" }
m31100| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:43245 #16 (11 connections now open)
{ "shardAdded" : "countSlaveOk-rs0", "ok" : 1 }
m30999| Thu Jun 14 01:31:40 [mongosMain] connection accepted from 10.255.119.66:41099 #2 (2 connections now open)
m30999| Thu Jun 14 01:31:40 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:31:40 [conn] best shard for new allocation is shard: countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101 mapped: 112 writeLock: 0
m30999| Thu Jun 14 01:31:40 [conn] put [test] on: countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:40 [conn] Request::process ns: test.$cmd msg id:50 attempt: 0
m30999| Thu Jun 14 01:31:40 [conn] single query: test.$cmd { drop: "countSlaveOk" } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:40 [conn] DROP: test.countSlaveOk
m30999| Thu Jun 14 01:31:40 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:40 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31100 serverID: 4fd9773ccc3801d195b16963
m30999| Thu Jun 14 01:31:40 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31101 serverID: 4fd9773ccc3801d195b16963
m30999| Thu Jun 14 01:31:40 [conn] initializing shard connection to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:40 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd9773ccc3801d195b16963'), authoritative: true }
m31100| Thu Jun 14 01:31:40 [initandlisten] connection accepted from 10.255.119.66:43247 #17 (12 connections now open)
m30999| Thu Jun 14 01:31:40 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:40 [conn] initial sharding result : { initialized: true, ok: 1.0 }
m31100| Thu Jun 14 01:31:40 [conn17] CMD: drop test.countSlaveOk
m31100| Thu Jun 14 01:31:40 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-0/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:31:40 [conn] Request::process ns: test.countSlaveOk msg id:51 attempt: 0
m30999| Thu Jun 14 01:31:40 [conn] write: test.countSlaveOk
m30999| Thu Jun 14 01:31:40 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:40 [mongosMain] connection accepted from 10.255.119.66:41101 #3 (3 connections now open)
m30999| Thu Jun 14 01:31:40 [mongosMain] connection accepted from 10.255.119.66:41102 #4 (4 connections now open)
m30999| Thu Jun 14 01:31:40 [conn] Request::process ns: test.countSlaveOk msg id:52 attempt: 0
m30999| Thu Jun 14 01:31:40 [conn] write: test.countSlaveOk
    [ ... the same Request::process / write pair repeats for msg ids 53 through 349 as the test keeps inserting into test.countSlaveOk ... ]
m30999| Thu Jun 14 01:31:40 [conn] Request::process ns: test.countSlaveOk msg id:350 attempt: 0
m30999| Thu Jun 14 01:31:40 [conn] write: test.countSlaveOk
m30999| Thu Jun 14 01:31:40 [conn] Request::process ns: test.$cmd msg id:351 attempt: 0
m30999| Thu Jun 14 01:31:40 [conn] single query: test.$cmd { getlasterror: 1.0 } ntoreturn: -1 options : 0
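The run of Request::process / write pairs above (msg ids 51 through 350, 300 writes in total) is the unacknowledged insert path: each insert is fire-and-forget, and the single getlasterror at msg id 351 is what forces an acknowledgement for the whole batch. A sketch of that pattern follows; the mongos address is an assumption, the document shape { i: ... } matches the oplog entry shown further down, and cycling i through 0..9 is an assumption about the test data.

    // Sketch of the write batch behind msg ids 51-351.
    var testDB = new Mongo("localhost:30999").getDB("test");    // assumed mongos address
    var coll = testDB.countSlaveOk;
    coll.drop();                                                // the "DROP: test.countSlaveOk" line above
    for (var n = 0; n < 300; n++) {
        coll.insert({ i: n % 10 });                             // one Request::process / write pair per insert
    }
    printjson(testDB.runCommand({ getlasterror: 1 }));          // msg id 351: acknowledge the whole batch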
m29000| Thu Jun 14 01:31:41 [FileAllocator] done allocating datafile /data/db/countSlaveOk-config0/config.1, size: 32MB, took 0.607 secs
m31100| Thu Jun 14 01:31:41 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-0/test.ns, size: 16MB, took 0.789 secs
m31100| Thu Jun 14 01:31:41 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-0/test.0, filling with zeroes...
m31100| Thu Jun 14 01:31:41 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-0/test.0, size: 16MB, took 0.276 secs
m31100| Thu Jun 14 01:31:41 [conn17] build index test.countSlaveOk { _id: 1 }
m31100| Thu Jun 14 01:31:41 [conn17] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:31:41 [conn17] insert test.countSlaveOk keyUpdates:0 locks(micros) W:404 w:1075721 1075ms
m31101| Thu Jun 14 01:31:41 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-1/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:31:41 [conn] Request::process ns: config.version msg id:352 attempt: 0
m30999| Thu Jun 14 01:31:41 [conn] shard query: config.version {}
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 BackgroundJob starting: ConnectBG
m31100| Thu Jun 14 01:31:41 [initandlisten] connection accepted from 10.255.119.66:43250 #18 (13 connections now open)
m30999| Thu Jun 14 01:31:41 [conn] initializing shard connection to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:41 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd9773ccc3801d195b16963'), authoritative: true }
m30999| Thu Jun 14 01:31:41 [conn] initial sharding result : { initialized: true, ok: 1.0 }
m30999| Thu Jun 14 01:31:41 [conn] creating new connection to:domU-12-31-39-01-70-B4:29000
m29000| Thu Jun 14 01:31:41 [initandlisten] connection accepted from 10.255.119.66:41912 #7 (7 connections now open)
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "countSlaveOk-rs0", "host" : "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : false, "primary" : "countSlaveOk-rs0" }
m30999| Thu Jun 14 01:31:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:41 [conn] connected connection!
m30999| Thu Jun 14 01:31:41 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:29000 serverID: 4fd9773ccc3801d195b16963
m30999| Thu Jun 14 01:31:41 [conn] initializing shard connection to domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:31:41 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd9773ccc3801d195b16963'), authoritative: true }
m30999| Thu Jun 14 01:31:41 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:31:41 [WriteBackListener-domU-12-31-39-01-70-B4:29000] domU-12-31-39-01-70-B4:29000 is not a shard node
m30999| Thu Jun 14 01:31:41 [conn] initial sharding result : { initialized: true, ok: 1.0 }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] Request::process ns: config.version msg id:353 attempt: 0
m30999| Thu Jun 14 01:31:41 [conn] shard query: config.version {}
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] Request::process ns: config.shards msg id:354 attempt: 0
m30999| Thu Jun 14 01:31:41 [conn] shard query: config.shards { query: {}, orderby: { _id: 1.0 } }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: "countSlaveOk-rs0", host: "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] Request::process ns: config.databases msg id:355 attempt: 0
m30999| Thu Jun 14 01:31:41 [conn] shard query: config.databases { query: {}, orderby: { name: 1.0 } }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:41 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
ReplSetTest Timestamp(1339651901000, 300)
{
    "ts" : Timestamp(1339651889000, 1),
    "h" : NumberLong("5580217503205818385"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd977311c4e40fcfe85bbee"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339651889000:1 and latest is 1339651901000:300
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 1
m31101| Thu Jun 14 01:31:42 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-1/test.ns, size: 16MB, took 0.31 secs
m31101| Thu Jun 14 01:31:42 [FileAllocator] allocating new datafile /data/db/countSlaveOk-rs0-1/test.0, filling with zeroes...
m31101| Thu Jun 14 01:31:42 [FileAllocator] done allocating datafile /data/db/countSlaveOk-rs0-1/test.0, size: 16MB, took 0.243 secs
m31101| Thu Jun 14 01:31:42 [rsSync] build index test.countSlaveOk { _id: 1 }
m31101| Thu Jun 14 01:31:42 [rsSync] build index done. scanned 0 total records. 0 secs
{
    "ts" : Timestamp(1339651901000, 300),
    "h" : NumberLong("4587030574078038783"),
    "op" : "i",
    "ns" : "test.countSlaveOk",
    "o" : {
        "_id" : ObjectId("4fd9773c1c4e40fcfe85bd1c"),
        "i" : 9
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339651901000:300 and latest is 1339651901000:300
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 301
ReplSetTest await synced=true
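
The await above works by comparing the newest oplog entry on each member until the secondary's timestamp reaches the primary's. A minimal sketch of the same check from the mongo shell follows; the host:port values are the ones in this log, the helper name lastOpTime is ours, and the timestamp comparison is simplified to the seconds field only.

    // Rough await-replication check against both members directly.
    function lastOpTime(conn) {
        // Newest oplog entry = the member's latest applied operation.
        return conn.getDB("local").getCollection("oplog.rs")
                   .find().sort({ $natural: -1 }).limit(1).next().ts;
    }
    var primary   = new Mongo("domU-12-31-39-01-70-B4:31100");
    var secondary = new Mongo("domU-12-31-39-01-70-B4:31101");
    secondary.setSlaveOk();                          // allow reads on the secondary
    var target = lastOpTime(primary);
    while (lastOpTime(secondary).t < target.t) {     // simplified: seconds only
        sleep(100);
    }
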
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Thu Jun 14 01:31:43 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:31:43 [interruptThread] now exiting
m31100| Thu Jun 14 01:31:43 dbexit:
m31100| Thu Jun 14 01:31:43 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:31:43 [interruptThread] closing listening socket: 16
m31100| Thu Jun 14 01:31:43 [interruptThread] closing listening socket: 17
m31100| Thu Jun 14 01:31:43 [interruptThread] closing listening socket: 18
m31100| Thu Jun 14 01:31:43 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:31:43 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:31:43 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:31:43 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Thu Jun 14 01:31:43 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:31:43 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:31:43 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:31:43 [conn1] end connection 10.255.119.66:43211 (11 connections now open)
m31101| Thu Jun 14 01:31:43 [conn3] end connection 10.255.119.66:37591 (6 connections now open)
m31101| Thu Jun 14 01:31:43 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:31:43 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:31:43 dbexit: really exiting now
m30999| Thu Jun 14 01:31:43 [WriteBackListener-domU-12-31-39-01-70-B4:31100] Socket recv() conn closed? 10.255.119.66:31100
m30999| Thu Jun 14 01:31:43 [WriteBackListener-domU-12-31-39-01-70-B4:31100] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [0] server [10.255.119.66:31100]
m30999| Thu Jun 14 01:31:43 [WriteBackListener-domU-12-31-39-01-70-B4:31100] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:31:43 [WriteBackListener-domU-12-31-39-01-70-B4:31100] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd9773ccc3801d195b16963') }
m30999| Thu Jun 14 01:31:43 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd9773ccc3801d195b16963') }
Thu Jun 14 01:31:44 shell: stopped mongo program on port 31100
Thu Jun 14 01:31:44 DBClientCursor::init call() failed
Thu Jun 14 01:31:44 query failed : admin.$cmd { ismaster: 1.0 } to: 127.0.0.1:31100
ReplSetTest Could not call ismaster on node 0
{
    "set" : "countSlaveOk-rs0",
    "date" : ISODate("2012-06-14T05:31:44Z"),
    "myState" : 2,
    "syncingTo" : "domU-12-31-39-01-70-B4:31100",
    "members" : [
        {
            "_id" : 0,
            "name" : "domU-12-31-39-01-70-B4:31100",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 23,
            "optime" : Timestamp(1339651901000, 300),
            "optimeDate" : ISODate("2012-06-14T05:31:41Z"),
            "lastHeartbeat" : ISODate("2012-06-14T05:31:43Z"),
            "pingMs" : 0
        },
        {
            "_id" : 1,
            "name" : "domU-12-31-39-01-70-B4:31101",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 35,
            "optime" : Timestamp(1339651901000, 300),
            "optimeDate" : ISODate("2012-06-14T05:31:41Z"),
            "errmsg" : "db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100",
            "self" : true
        }
    ],
    "ok" : 1
}
Awaiting domU-12-31-39-01-70-B4:31101 to be { "ok" : true, "secondary" : true } for connection to domU-12-31-39-01-70-B4:30999 (rs: undefined)
{
    "countSlaveOk-rs0" : {
        "hosts" : [
            {
                "addr" : "domU-12-31-39-01-70-B4:31100",
                "ok" : true,
                "ismaster" : true,
                "hidden" : false,
                "secondary" : false,
                "pingTimeMillis" : 0
            },
            {
                "addr" : "domU-12-31-39-01-70-B4:31101",
                "ok" : true,
                "ismaster" : false,
                "hidden" : false,
                "secondary" : true,
                "pingTimeMillis" : 0
            }
        ],
        "master" : 0,
        "nextSlave" : 0
    }
}
Awaiting domU-12-31-39-01-70-B4:31100 to be { "ok" : false } for connection to domU-12-31-39-01-70-B4:30999 (rs: undefined)
{
    "countSlaveOk-rs0" : {
        "hosts" : [
            {
                "addr" : "domU-12-31-39-01-70-B4:31100",
                "ok" : true,
                "ismaster" : true,
                "hidden" : false,
                "secondary" : false,
                "pingTimeMillis" : 0
            },
            {
                "addr" : "domU-12-31-39-01-70-B4:31101",
                "ok" : true,
                "ismaster" : false,
                "hidden" : false,
                "secondary" : true,
                "pingTimeMillis" : 0
            }
        ],
        "master" : 0,
        "nextSlave" : 0
    }
}
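
The dumps above are the shell polling the mongos for its current replica-set view until it notices that 31100 is gone. A rough equivalent from the shell is sketched below; the mongos address is the one in this log, and the reply field layout ("replicaSets" with a per-set "hosts" array) is assumed from this server series, so treat it as illustrative only.

    // Ask the mongos how it currently sees the set's members.
    var mongos = new Mongo("domU-12-31-39-01-70-B4:30999");
    var stats  = mongos.getDB("admin").runCommand({ connPoolStats: 1 });
    printjson(stats.replicaSets["countSlaveOk-rs0"].hosts);
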
m30999| Thu Jun 14 01:31:44 [conn] Request::process ns: admin.$cmd msg id:376 attempt: 0
m30999| Thu Jun 14 01:31:44 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:44 [conn] Request::process ns: admin.$cmd msg id:377 attempt: 0
m30999| Thu Jun 14 01:31:44 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:44 [WriteBackListener-domU-12-31-39-01-70-B4:31100] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:44 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:44 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : socket exception
m30999| Thu Jun 14 01:31:45 [conn] Request::process ns: admin.$cmd msg id:378 attempt: 0
m30999| Thu Jun 14 01:31:45 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:45 [conn] Request::process ns: admin.$cmd msg id:379 attempt: 0
m30999| Thu Jun 14 01:31:45 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m31101| Thu Jun 14 01:31:45 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Thu Jun 14 01:31:45 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "countSlaveOk-rs0", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31101" }
m31101| Thu Jun 14 01:31:45 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31101| Thu Jun 14 01:31:45 [rsMgr] replSet can't see a majority, will not try to elect self
m30999| Thu Jun 14 01:31:45 [conn] Request::process ns: admin.$cmd msg id:380 attempt: 0
m30999| Thu Jun 14 01:31:45 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:45 [conn] Request::process ns: admin.$cmd msg id:381 attempt: 0
m30999| Thu Jun 14 01:31:45 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:45 [conn] Request::process ns: admin.$cmd msg id:382 attempt: 0
m30999| Thu Jun 14 01:31:45 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:46 [conn] Request::process ns: admin.$cmd msg id:383 attempt: 0
m30999| Thu Jun 14 01:31:46 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:46 [conn] Request::process ns: admin.$cmd msg id:384 attempt: 0
m30999| Thu Jun 14 01:31:46 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:46 [conn] Request::process ns: admin.$cmd msg id:385 attempt: 0
m30999| Thu Jun 14 01:31:46 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:46 [conn] Request::process ns: admin.$cmd msg id:386 attempt: 0
m30999| Thu Jun 14 01:31:46 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:46 [WriteBackListener-domU-12-31-39-01-70-B4:31100] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:46 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:46 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : socket exception
{
    "countSlaveOk-rs0" : {
        "hosts" : [
            {
                "addr" : "domU-12-31-39-01-70-B4:31100",
                "ok" : true,
                "ismaster" : true,
                "hidden" : false,
                "secondary" : false,
                "pingTimeMillis" : 0
            },
            {
                "addr" : "domU-12-31-39-01-70-B4:31101",
                "ok" : true,
                "ismaster" : false,
                "hidden" : false,
                "secondary" : true,
                "pingTimeMillis" : 0
            }
        ],
        "master" : 0,
        "nextSlave" : 0
    }
}
m30999| Thu Jun 14 01:31:46 [conn] Request::process ns: admin.$cmd msg id:387 attempt: 0
m30999| Thu Jun 14 01:31:46 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:47 [conn] Request::process ns: admin.$cmd msg id:388 attempt: 0
m30999| Thu Jun 14 01:31:47 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:47 [conn] Request::process ns: admin.$cmd msg id:389 attempt: 0
m30999| Thu Jun 14 01:31:47 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:47 [conn] Request::process ns: admin.$cmd msg id:390 attempt: 0
m30999| Thu Jun 14 01:31:47 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:47 [conn] Request::process ns: admin.$cmd msg id:391 attempt: 0
m30999| Thu Jun 14 01:31:47 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:48 [conn] Request::process ns: admin.$cmd msg id:392 attempt: 0
m30999| Thu Jun 14 01:31:48 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:48 [conn] Request::process ns: admin.$cmd msg id:393 attempt: 0
m30999| Thu Jun 14 01:31:48 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:48 [conn] Request::process ns: admin.$cmd msg id:394 attempt: 0
m30999| Thu Jun 14 01:31:48 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:48 [conn] Request::process ns: admin.$cmd msg id:395 attempt: 0
m30999| Thu Jun 14 01:31:48 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:48 [conn] Request::process ns: admin.$cmd msg id:396 attempt: 0
m30999| Thu Jun 14 01:31:48 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
{
    "countSlaveOk-rs0" : {
        "hosts" : [
            {
                "addr" : "domU-12-31-39-01-70-B4:31100",
                "ok" : true,
                "ismaster" : true,
                "hidden" : false,
                "secondary" : false,
                "pingTimeMillis" : 0
            },
            {
                "addr" : "domU-12-31-39-01-70-B4:31101",
                "ok" : true,
                "ismaster" : false,
                "hidden" : false,
                "secondary" : true,
                "pingTimeMillis" : 0
            }
        ],
        "master" : 0,
        "nextSlave" : 0
    }
}
m30999| Thu Jun 14 01:31:49 [conn] Request::process ns: admin.$cmd msg id:397 attempt: 0
m30999| Thu Jun 14 01:31:49 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:49 [conn] Request::process ns: admin.$cmd msg id:398 attempt: 0
m30999| Thu Jun 14 01:31:49 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m31101| Thu Jun 14 01:31:49 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:31:49 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:49 [conn] Request::process ns: admin.$cmd msg id:399 attempt: 0
m30999| Thu Jun 14 01:31:49 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:49 [conn] Request::process ns: admin.$cmd msg id:400 attempt: 0
m30999| Thu Jun 14 01:31:49 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:49 [conn] Request::process ns: admin.$cmd msg id:401 attempt: 0
m30999| Thu Jun 14 01:31:49 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
Thu Jun 14 01:31:49 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Thu Jun 14 01:31:49 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:31:49 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31100 failed couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:49 [WriteBackListener-domU-12-31-39-01-70-B4:31100] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:49 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:49 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : socket exception
m30999| Thu Jun 14 01:31:50 [conn] Request::process ns: admin.$cmd msg id:405 attempt: 0
m30999| Thu Jun 14 01:31:50 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:50 [conn] Request::process ns: admin.$cmd msg id:406 attempt: 0
m30999| Thu Jun 14 01:31:50 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:50 [conn] Request::process ns: admin.$cmd msg id:407 attempt: 0
m30999| Thu Jun 14 01:31:50 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:50 [conn] Request::process ns: admin.$cmd msg id:408 attempt: 0
m30999| Thu Jun 14 01:31:50 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:50 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:31:50 [Balancer] Socket recv() conn closed? 10.255.119.66:31100
m30999| Thu Jun 14 01:31:50 [Balancer] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [0] server [10.255.119.66:31100]
m30999| Thu Jun 14 01:31:50 [Balancer] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:31:50 [Balancer] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { features: 1 }
m30999| Thu Jun 14 01:31:50 [Balancer] scoped connection to domU-12-31-39-01-70-B4:29000 not being returned to the pool
m30999| Thu Jun 14 01:31:50 [Balancer] caught exception while doing balance: DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { features: 1 }
m30999| Thu Jun 14 01:31:50 [Balancer] *** End of balancing round
m29000| Thu Jun 14 01:31:50 [conn5] end connection 10.255.119.66:41898 (6 connections now open)
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] checking replica set: countSlaveOk-rs0
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] Socket recv() conn closed? 10.255.119.66:31100
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [0] server [10.255.119.66:31100]
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { ismaster: 1 }
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { ismaster: 1 }
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] _check : countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:50 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31100 failed couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651910801), ok: 1.0 }
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] creating new connection to:domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:50 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] connected connection!
m31101| Thu Jun 14 01:31:50 [initandlisten] connection accepted from 10.255.119.66:37636 #8 (7 connections now open)
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:50 [conn] Request::process ns: admin.$cmd msg id:409 attempt: 0
m30999| Thu Jun 14 01:31:50 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:50 [conn] Request::process ns: test.$cmd msg id:410 attempt: 0
m30999| Thu Jun 14 01:31:50 [conn] single query: test.$cmd { count: "countSlaveOk", query: { i: 0.0 }, fields: {} } ntoreturn: -1 options : 4
m30999| Thu Jun 14 01:31:50 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 4, query: { count: "countSlaveOk", query: { i: 0.0 } }, fields: {} } and CInfo { v_ns: "test.countSlaveOk", filter: { i: 0.0 } }
m30999| Thu Jun 14 01:31:50 [conn] [pcursor] initializing over 1 shards required by [unsharded @ countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101]
m30999| Thu Jun 14 01:31:50 [conn] [pcursor] initializing on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:31:50 [conn] _check : countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:50 [conn] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:50 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651910864), ok: 1.0 }
m30999| Thu Jun 14 01:31:50 [conn] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:50 [conn] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
Thu Jun 14 01:31:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651910916), ok: 1.0 }
m31101| Thu Jun 14 01:31:51 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:51 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:51 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651911804), ok: 1.0 }
m30999| Thu Jun 14 01:31:51 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:51 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:51 [conn] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:51 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651911868), ok: 1.0 }
m30999| Thu Jun 14 01:31:51 [conn] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:51 [conn] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:31:51 [ReplicaSetMonitorWatcher] warning: No primary detected for set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:52 [ReplicaSetMonitorWatcher] warning: No primary detected for set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:52 [conn] warning: No primary detected for set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:52 [conn] User Assertion: 10009:ReplicaSetMonitor no master found for set: countSlaveOk-rs0
m30999| Thu Jun 14 01:31:52 [conn] slave ':27017' is not initialized or invalid
m30999| Thu Jun 14 01:31:52 [conn] dbclient_rs getSlave countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:52 [conn] dbclient_rs getSlave found local secondary for queries: 1, ping time: 0
m30999| Thu Jun 14 01:31:52 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:52 [conn] [pcursor] initialized command (lazily) on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: { conn: "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", vinfo: "countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:52 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:31:52 [conn] [pcursor] finishing on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: { conn: "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", vinfo: "countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:52 [conn] [pcursor] finished on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: { conn: "(done)", vinfo: "countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", cursor: { n: 30.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:31:52 [conn] Request::process ns: test.countSlaveOk msg id:413 attempt: 0
m31101| Thu Jun 14 01:31:52 [initandlisten] connection accepted from 10.255.119.66:37639 #9 (8 connections now open)
m30999| Thu Jun 14 01:31:52 [conn] shard query: test.countSlaveOk { i: 0.0 }
m30999| Thu Jun 14 01:31:52 [conn] [pcursor] creating pcursor over QSpec { ns: "test.countSlaveOk", n2skip: 0, n2return: 0, options: 4, query: { i: 0.0 }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:31:52 [conn] [pcursor] initializing over 1 shards required by [unsharded @ countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101]
m30999| Thu Jun 14 01:31:52 [conn] [pcursor] initializing on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:31:52 [conn] _check : countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:52 [conn] trying reconnect to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:52 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:52 [conn] reconnect domU-12-31-39-01-70-B4:31100 failed couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:52 [conn] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:52 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651912874), ok: 1.0 }
m30999| Thu Jun 14 01:31:52 [conn] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:52 [conn] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:31:53 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:53 [conn] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:53 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651913876), ok: 1.0 }
m30999| Thu Jun 14 01:31:53 [conn] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:53 [conn] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:53 [WriteBackListener-domU-12-31-39-01-70-B4:31100] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:53 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:53 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : socket exception
m30999| Thu Jun 14 01:31:54 [conn] warning: No primary detected for set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:54 [conn] User Assertion: 10009:ReplicaSetMonitor no master found for set: countSlaveOk-rs0
m30999| Thu Jun 14 01:31:54 [conn] [pcursor] initialized query (lazily) on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: { conn: "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", vinfo: "countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:54 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:31:54 [conn] [pcursor] finishing on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: { conn: "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", vinfo: "countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:31:54 [conn] [pcursor] finished on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: { conn: "(done)", vinfo: "countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", cursor: { _id: ObjectId('4fd9773c1c4e40fcfe85bbf1'), i: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:31:54 [conn] Request::process ns: test.$cmd msg id:414 attempt: 0
m30999| Thu Jun 14 01:31:54 [conn] single query: test.$cmd { distinct: "countSlaveOk", key: "i", query: {} } ntoreturn: -1 options : 4
m30999| Thu Jun 14 01:31:54 [conn] Request::process ns: test.$cmd msg id:415 attempt: 0
m30999| Thu Jun 14 01:31:54 [conn] single query: test.$cmd { count: "countSlaveOk", query: { i: 0.0 }, fields: {} } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:31:54 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "countSlaveOk", query: { i: 0.0 } }, fields: {} } and CInfo { v_ns: "test.countSlaveOk", filter: { i: 0.0 } }
m30999| Thu Jun 14 01:31:54 [conn] [pcursor] initializing over 1 shards required by [unsharded @ countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101]
m30999| Thu Jun 14 01:31:54 [conn] [pcursor] initializing on shard countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:31:54 [conn] _check : countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:54 [conn] trying reconnect to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:54 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:31:54 [conn] reconnect domU-12-31-39-01-70-B4:31100 failed couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:54 [conn] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:54 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651914912), ok: 1.0 }
m30999| Thu Jun 14 01:31:54 [conn] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:54 [conn] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:31:55 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:55 [conn] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
m30999| Thu Jun 14 01:31:55 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "countSlaveOk-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339651915916), ok: 1.0 }
m30999| Thu Jun 14 01:31:55 [conn] dbclient_rs nodes[0].ok = false domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:31:55 [conn] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:31:56 [conn] warning: No primary detected for set countSlaveOk-rs0
m30999| Thu Jun 14 01:31:56 [conn] User Assertion: 10009:ReplicaSetMonitor no master found for set: countSlaveOk-rs0
Non-slaveOk'd connection failed.
m30999| Thu Jun 14 01:31:56 [conn] warning: db exception when initializing on countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101, current connection state is { state: { conn: "countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", vinfo: "countSlaveOk-rs0:countSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101", cursor: "(none)", count: 0, done: false }, retryNext: false, init: false, finish: false, errored: false } :: caused by :: 10009 ReplicaSetMonitor no master found for set: countSlaveOk-rs0
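
This is the point of the countSlaveOk test: with the primary down, the count issued with the slaveOk wire flag (options : 4 above) is answered by the secondary (cursor { n: 30.0, ok: 1.0 }), while the same count without slaveOk (options : 0) fails with "no master found", which is the "Non-slaveOk'd connection failed." line. A minimal sketch of that behaviour through the mongos; the address is the one in this log and the try/catch is ours.

    // Reads routed through mongos on 30999 while 31100 is shut down.
    var conn = new Mongo("domU-12-31-39-01-70-B4:30999");
    var db   = conn.getDB("test");

    conn.setSlaveOk(true);                     // reads may go to a secondary
    print(db.countSlaveOk.count({ i: 0 }));    // served by 31101 (30 in this run)

    conn.setSlaveOk(false);                    // primary-only reads again
    try {
        db.countSlaveOk.count({ i: 0 });       // expected to throw: no master in the set
    } catch (e) {
        print("Non-slaveOk'd count failed as expected: " + e);
    }
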
m30999| Thu Jun 14 01:31:56 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31101| Thu Jun 14 01:31:56 [conn9] end connection 10.255.119.66:37639 (7 connections now open)
m29000| Thu Jun 14 01:31:56 [conn3] end connection 10.255.119.66:41894 (5 connections now open)
m29000| Thu Jun 14 01:31:56 [conn4] end connection 10.255.119.66:41897 (5 connections now open)
m31101| Thu Jun 14 01:31:56 [conn8] end connection 10.255.119.66:37636 (6 connections now open)
m31101| Thu Jun 14 01:31:56 [conn6] end connection 10.255.119.66:37617 (5 connections now open)
m29000| Thu Jun 14 01:31:56 [conn7] end connection 10.255.119.66:41912 (3 connections now open)
m29000| Thu Jun 14 01:31:56 [conn6] end connection 10.255.119.66:41899 (3 connections now open)
m31101| Thu Jun 14 01:31:57 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:31:57 shell: stopped mongo program on port 30999
Thu Jun 14 01:31:57 No db started on port: 30000
Thu Jun 14 01:31:57 shell: stopped mongo program on port 30000
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
Thu Jun 14 01:31:57 No db started on port: 31100
Thu Jun 14 01:31:57 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Thu Jun 14 01:31:57 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:31:57 [interruptThread] now exiting
m31101| Thu Jun 14 01:31:57 dbexit:
m31101| Thu Jun 14 01:31:57 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:31:57 [interruptThread] closing listening socket: 18
m31101| Thu Jun 14 01:31:57 [interruptThread] closing listening socket: 19
m31101| Thu Jun 14 01:31:57 [interruptThread] closing listening socket: 21
m31101| Thu Jun 14 01:31:57 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:31:57 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:31:57 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:31:57 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:31:57 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:31:57 [interruptThread] closeAllFiles() finished
m31101| Thu Jun 14 01:31:57 [interruptThread] shutdown: removing fs lock...
m31101| Thu Jun 14 01:31:57 dbexit: really exiting now
Thu Jun 14 01:31:58 shell: stopped mongo program on port 31101
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
m29000| Thu Jun 14 01:31:58 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:31:58 [interruptThread] now exiting
m29000| Thu Jun 14 01:31:58 dbexit:
m29000| Thu Jun 14 01:31:58 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:31:58 [interruptThread] closing listening socket: 26
m29000| Thu Jun 14 01:31:58 [interruptThread] closing listening socket: 27
m29000| Thu Jun 14 01:31:58 [interruptThread] closing listening socket: 28
m29000| Thu Jun 14 01:31:58 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:31:58 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:31:58 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:31:58 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:31:58 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:31:58 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:31:58 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:31:58 dbexit: really exiting now
Thu Jun 14 01:31:59 shell: stopped mongo program on port 29000
*** ShardingTest countSlaveOk completed successfully in 50.97 seconds ***
51022.203922ms
Thu Jun 14 01:32:00 [initandlisten] connection accepted from 127.0.0.1:54692 #20 (7 connections now open)
*******************************************
Test : cursor1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/cursor1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/cursor1.js";TestData.testFile = "cursor1.js";TestData.testName = "cursor1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:32:00 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/sharding_cursor10'
Thu Jun 14 01:32:00 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/sharding_cursor10
m30000| Thu Jun 14 01:32:00
m30000| Thu Jun 14 01:32:00 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:32:00
m30000| Thu Jun 14 01:32:00 [initandlisten] MongoDB starting : pid=24021 port=30000 dbpath=/data/db/sharding_cursor10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:32:00 [initandlisten]
m30000| Thu Jun 14 01:32:00 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:32:00 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:32:00 [initandlisten]
m30000| Thu Jun 14 01:32:00 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:32:00 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:32:00 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:32:00 [initandlisten]
m30000| Thu Jun 14 01:32:00 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:32:00 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:32:00 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:32:00 [initandlisten] options: { dbpath: "/data/db/sharding_cursor10", port: 30000 }
m30000| Thu Jun 14 01:32:00 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:32:00 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/sharding_cursor11'
m30000| Thu Jun 14 01:32:00 [initandlisten] connection accepted from 127.0.0.1:51191 #1 (1 connection now open)
Thu Jun 14 01:32:00 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/sharding_cursor11
m30001| Thu Jun 14 01:32:00
m30001| Thu Jun 14 01:32:00 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:32:00
m30001| Thu Jun 14 01:32:00 [initandlisten] MongoDB starting : pid=24034 port=30001 dbpath=/data/db/sharding_cursor11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:32:00 [initandlisten]
m30001| Thu Jun 14 01:32:00 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:32:00 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:32:00 [initandlisten]
m30001| Thu Jun 14 01:32:00 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:32:00 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:32:00 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:32:00 [initandlisten]
m30001| Thu Jun 14 01:32:00 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:32:00 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:32:00 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:32:00 [initandlisten] options: { dbpath: "/data/db/sharding_cursor11", port: 30001 }
m30001| Thu Jun 14 01:32:00 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:32:00 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
ShardingTest sharding_cursor1 :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001
    ]
}
Thu Jun 14 01:32:00 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -vv
m30001| Thu Jun 14 01:32:00 [initandlisten] connection accepted from 127.0.0.1:42600 #1 (1 connection now open)
m30000| Thu Jun 14 01:32:00 [initandlisten] connection accepted from 127.0.0.1:51194 #2 (2 connections now open)
m30000| Thu Jun 14 01:32:00 [FileAllocator] allocating new datafile /data/db/sharding_cursor10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:32:00 [FileAllocator] creating directory /data/db/sharding_cursor10/_tmp
m30999| Thu Jun 14 01:32:00 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:32:00 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24048 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:32:00 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:32:00 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:32:00 [mongosMain] options: { configdb: "localhost:30000", port: 30999, vv: true }
m30999| Thu Jun 14 01:32:00 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:32:00 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:32:00 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:00 [mongosMain] connected connection!
m30000| Thu Jun 14 01:32:00 [initandlisten] connection accepted from 127.0.0.1:51196 #3 (3 connections now open)
m30000| Thu Jun 14 01:32:00 [FileAllocator] done allocating datafile /data/db/sharding_cursor10/config.ns, size: 16MB, took 0.241 secs
m30000| Thu Jun 14 01:32:00 [FileAllocator] allocating new datafile /data/db/sharding_cursor10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:32:01 [FileAllocator] done allocating datafile /data/db/sharding_cursor10/config.0, size: 16MB, took 0.303 secs
m30000| Thu Jun 14 01:32:01 [FileAllocator] allocating new datafile /data/db/sharding_cursor10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:32:01 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn2] insert config.settings keyUpdates:0 locks(micros) w:556811 556ms
m30000| Thu Jun 14 01:32:01 [initandlisten] connection accepted from 127.0.0.1:51199 #4 (4 connections now open)
m30000| Thu Jun 14 01:32:01 [initandlisten] connection accepted from 127.0.0.1:51200 #5 (5 connections now open)
m30000| Thu Jun 14 01:32:01 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:32:01 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:32:01 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:32:01 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [initandlisten] connection accepted from 127.0.0.1:51201 #6 (6 connections now open)
m30000| Thu Jun 14 01:32:01 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:32:01 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn3] info: creating collection config.shards on add index
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:32:01 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:01 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:32:01 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:01 [mongosMain] connected connection!
m30999| Thu Jun 14 01:32:01 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:32:01 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:32:01 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:32:01 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:32:01 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:32:01 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:32:01 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:32:01 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:32:01
m30999| Thu Jun 14 01:32:01 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:32:01 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:01 [Balancer] connected connection!
m30999| Thu Jun 14 01:32:01 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:32:01 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:32:01 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:32:01 [Balancer] skew from remote server localhost:30000 found: -1
m30999| Thu Jun 14 01:32:01 [Balancer] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds.
m30999| Thu Jun 14 01:32:01 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:32:01 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651921:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339651921:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339651921:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:32:01 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977518b3e1019098ebc0f" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:32:01 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651921:1804289383' acquired, ts : 4fd977518b3e1019098ebc0f
m30999| Thu Jun 14 01:32:01 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:32:01 [Balancer] no collections to balance
m30999| Thu Jun 14 01:32:01 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:32:01 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:32:01 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651921:1804289383' unlocked.
m30999| Thu Jun 14 01:32:01 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651921:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:32:01 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:32:01 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651921:1804289383', sleeping for 30000ms
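
The balancing round above starts by taking the cluster-wide "balancer" lock, a document in config.locks on the config server: the acquirer flips state from 0 to 1 and records who/why/when, while the LockPinger keeps a heartbeat in config.lockpings. The sketch below only illustrates that idea with findAndModify; it is not the actual locking code, which also handles contention, lock takeover and clock-skew checks.

    // Simplified picture of acquiring the "balancer" lock document.
    var config = new Mongo("localhost:30000").getDB("config");
    var lock = config.locks.findAndModify({
        query:  { _id: "balancer", state: 0 },            // only if currently unlocked
        update: { $set: { state: 1,
                          who: "example-host:30999:balancer",   // illustrative identity
                          why: "doing balance round",
                          when: new Date() } },
        new: true
    });
    if (lock) {
        // ... do the balancing round ...
        config.locks.update({ _id: "balancer" }, { $set: { state: 0 } });  // release
    }
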
m30000| Thu Jun 14 01:32:01 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:32:01 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:01 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:32:01 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:32:01 [mongosMain] connection accepted from 127.0.0.1:43181 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:32:01 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:32:01 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:32:01 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30000| Thu Jun 14 01:32:01 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn4] build index done. scanned 0 total records. 0 secs
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:32:01 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:01 [conn] connected connection!
m30999| Thu Jun 14 01:32:01 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:32:01 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:01 [conn] connected connection!
m30999| Thu Jun 14 01:32:01 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977518b3e1019098ebc0e
m30999| Thu Jun 14 01:32:01 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:32:01 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd977518b3e1019098ebc0e'), authoritative: true }
m30999| Thu Jun 14 01:32:01 [conn] creating new connection to:localhost:30001
m30000| Thu Jun 14 01:32:01 [initandlisten] connection accepted from 127.0.0.1:51204 #7 (7 connections now open)
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:01 [conn] connected connection!
m30999| Thu Jun 14 01:32:01 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977518b3e1019098ebc0e
m30999| Thu Jun 14 01:32:01 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:32:01 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd977518b3e1019098ebc0e'), authoritative: true }
m30999| Thu Jun 14 01:32:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.settings", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:32:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
{ "_id" : "chunksize", "value" : 50 }
m30999| Thu Jun 14 01:32:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "chunksize", value: 50.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "_id" : "balancer", "stopped" : true }
m30999| Thu Jun 14 01:32:01 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:32:01 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:32:01 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:32:01 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:32:01 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:32:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:32:01 [conn] connected connection!
m30999| Thu Jun 14 01:32:01 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:01 [conn] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:01 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd977518b3e1019098ebc10
m30999| Thu Jun 14 01:32:01 [conn] loaded 1 chunks into new chunk manager for test.foo with version 1|0||4fd977518b3e1019098ebc10
m30999| Thu Jun 14 01:32:01 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd977518b3e1019098ebc10 based on: (empty)
m30999| Thu Jun 14 01:32:01 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:32:01 [conn] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x9002838
m30999| Thu Jun 14 01:32:01 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977518b3e1019098ebc0e'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ffe050
m30999| Thu Jun 14 01:32:01 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:32:01 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd977518b3e1019098ebc10 manager: 0x9002838
m30999| Thu Jun 14 01:32:01 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977518b3e1019098ebc10'), serverID: ObjectId('4fd977518b3e1019098ebc0e'), shard: "shard0001", shardHost: "localhost:30001" } 0x8fff880
m30000| Thu Jun 14 01:32:01 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:32:01 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:01 [initandlisten] connection accepted from 127.0.0.1:42610 #2 (2 connections now open)
m30001| Thu Jun 14 01:32:01 [initandlisten] connection accepted from 127.0.0.1:42612 #3 (3 connections now open)
m30001| Thu Jun 14 01:32:01 [initandlisten] connection accepted from 127.0.0.1:42613 #4 (4 connections now open)
m30001| Thu Jun 14 01:32:01 [FileAllocator] allocating new datafile /data/db/sharding_cursor11/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:32:01 [FileAllocator] creating directory /data/db/sharding_cursor11/_tmp
m30000| Thu Jun 14 01:32:01 [FileAllocator] done allocating datafile /data/db/sharding_cursor10/config.1, size: 32MB, took 0.608 secs
m30001| Thu Jun 14 01:32:02 [FileAllocator] done allocating datafile /data/db/sharding_cursor11/test.ns, size: 16MB, took 0.342 secs
m30001| Thu Jun 14 01:32:02 [FileAllocator] allocating new datafile /data/db/sharding_cursor11/test.0, filling with zeroes...
m30001| Thu Jun 14 01:32:02 [FileAllocator] done allocating datafile /data/db/sharding_cursor11/test.0, size: 16MB, took 0.272 secs
m30001| Thu Jun 14 01:32:02 [conn4] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:32:02 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:02 [conn4] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:32:02 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) W:101 r:282 w:1166164 1166ms
m30001| Thu Jun 14 01:32:02 [conn3] command admin.$cmd command: { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977518b3e1019098ebc10'), serverID: ObjectId('4fd977518b3e1019098ebc0e'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:67 reslen:173 1164ms
m30001| Thu Jun 14 01:32:02 [FileAllocator] allocating new datafile /data/db/sharding_cursor11/test.1, filling with zeroes...
m30001| Thu Jun 14 01:32:02 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:32:02 [initandlisten] connection accepted from 127.0.0.1:51207 #8 (8 connections now open)
m30999| Thu Jun 14 01:32:02 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:32:02 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd977518b3e1019098ebc10 manager: 0x9002838
m30999| Thu Jun 14 01:32:02 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977518b3e1019098ebc10'), serverID: ObjectId('4fd977518b3e1019098ebc0e'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8fff880
m30999| Thu Jun 14 01:32:02 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: -1, options: 0, query: { _id: "test" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test", partitioned: true, primary: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: -1, options: 0, query: { _id: "shard0001" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 7396451 splitThreshold: 921
m30999| Thu Jun 14 01:32:02 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: {} }, fields: {} } and CInfo { v_ns: "config.chunks", filter: {} }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: 3, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||4fd977518b3e1019098ebc10]
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: { _id: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: 5, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||4fd977518b3e1019098ebc10]
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: { _id: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: 7, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||4fd977518b3e1019098ebc10]
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||4fd977518b3e1019098ebc10", cursor: { _id: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:02 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:32:02 [initandlisten] connection accepted from 127.0.0.1:51208 #9 (9 connections now open)
m30001| Thu Jun 14 01:32:02 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 5.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:02 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:02 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651922:1361423573 (sleeping for 30000ms)
m30001| Thu Jun 14 01:32:02 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651922:1361423573' acquired, ts : 4fd97752a1927bb9b63a4597
m30001| Thu Jun 14 01:32:02 [conn4] splitChunk accepted at version 1|0||4fd977518b3e1019098ebc10
m30001| Thu Jun 14 01:32:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:02-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42613", time: new Date(1339651922327), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 5.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977518b3e1019098ebc10') }, right: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977518b3e1019098ebc10') } } }
m30001| Thu Jun 14 01:32:02 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651922:1361423573' unlocked.
m30999| Thu Jun 14 01:32:02 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|0||4fd977518b3e1019098ebc10 and 1 chunks
m30999| Thu Jun 14 01:32:02 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|2||4fd977518b3e1019098ebc10
m30999| Thu Jun 14 01:32:02 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd977518b3e1019098ebc10 based on: 1|0||4fd977518b3e1019098ebc10
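Here mongos forwards a splitChunk request to shard0001 that cuts the single chunk at { _id: 5 }, logs the split event under the distributed lock, and then reloads its chunk manager, going from version 1|0 to 1|2 with two chunks. A sketch of an explicit split issued through the mongos, with the split point taken from the splitKeys array in the log:

    // split test.foo at _id: 5 via the router
    var admin = db.getSiblingDB("admin");
    printjson(admin.runCommand({ split: "test.foo", middle: { _id: 5 } }));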
m30999| Thu Jun 14 01:32:02 [conn] CMD: movechunk: { movechunk: "test.foo", find: { _id: 5.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:32:02 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 5.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:32:02 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 5.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-_id_5.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:02 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:02 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651922:1361423573' acquired, ts : 4fd97752a1927bb9b63a4598
m30001| Thu Jun 14 01:32:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:02-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42613", time: new Date(1339651922331), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:02 [conn4] moveChunk request accepted at version 1|2||4fd977518b3e1019098ebc10
m30001| Thu Jun 14 01:32:02 [conn4] moveChunk number of documents: 5
m30001| Thu Jun 14 01:32:02 [initandlisten] connection accepted from 127.0.0.1:42616 #5 (5 connections now open)
m30000| Thu Jun 14 01:32:02 [FileAllocator] allocating new datafile /data/db/sharding_cursor10/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:32:03 [FileAllocator] done allocating datafile /data/db/sharding_cursor11/test.1, size: 32MB, took 0.912 secs
m30000| Thu Jun 14 01:32:03 [FileAllocator] done allocating datafile /data/db/sharding_cursor10/test.ns, size: 16MB, took 0.879 secs
m30000| Thu Jun 14 01:32:03 [FileAllocator] allocating new datafile /data/db/sharding_cursor10/test.0, filling with zeroes...
m30001| Thu Jun 14 01:32:03 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 5.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:32:03 [FileAllocator] done allocating datafile /data/db/sharding_cursor10/test.0, size: 16MB, took 0.258 secs
m30000| Thu Jun 14 01:32:03 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:32:03 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:03 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:32:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:03 [FileAllocator] allocating new datafile /data/db/sharding_cursor10/test.1, filling with zeroes...
m30000| Thu Jun 14 01:32:04 [FileAllocator] done allocating datafile /data/db/sharding_cursor10/test.1, size: 32MB, took 0.56 secs
m30001| Thu Jun 14 01:32:04 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 5.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 90, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:32:04 [conn4] moveChunk setting version to: 2|0||4fd977518b3e1019098ebc10
m30000| Thu Jun 14 01:32:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:04 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:04-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651924352), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, step1 of 5: 1148, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 870 } }
m30000| Thu Jun 14 01:32:04 [initandlisten] connection accepted from 127.0.0.1:51210 #10 (10 connections now open)
m30999| Thu Jun 14 01:32:04 [conn] moveChunk result: { ok: 1.0 }
m30001| Thu Jun 14 01:32:04 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 5.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 90, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:32:04 [conn4] moveChunk updating self version to: 2|1||4fd977518b3e1019098ebc10 through { _id: MinKey } -> { _id: 5.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:32:04 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:04-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42613", time: new Date(1339651924357), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:04 [conn4] forking for cleaning up chunk data
m30001| Thu Jun 14 01:32:04 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339651922:1361423573' unlocked.
m30001| Thu Jun 14 01:32:04 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:04-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42613", time: new Date(1339651924357), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 2007, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:32:04 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 5.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-_id_5.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:101 r:559 w:1166186 reslen:37 2027ms
m30001| Thu Jun 14 01:32:04 [cleanupOldData] (start) waiting to cleanup test.foo from { _id: 5.0 } -> { _id: MaxKey } # cursors:2
m30999| Thu Jun 14 01:32:04 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|2||4fd977518b3e1019098ebc10 and 2 chunks
m30999| Thu Jun 14 01:32:04 [conn] loaded 2 chunks into new chunk manager for test.foo with version 2|1||4fd977518b3e1019098ebc10
m30999| Thu Jun 14 01:32:04 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 2|1||4fd977518b3e1019098ebc10 based on: 1|2||4fd977518b3e1019098ebc10
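The moveChunk block above migrates the { _id: 5 } to MaxKey chunk from shard0001 to shard0000: the donor takes the distributed lock, the recipient clones 5 documents (90 bytes), the donor commits the new collection version 2|0 and bumps its remaining chunk to 2|1, and the cleanupOldData thread later deletes the 5 migrated documents. A sketch of the admin command behind the "CMD: movechunk" line, using the same find/to arguments:

    // move the chunk containing _id: 5 to the shard at localhost:30000
    var admin = db.getSiblingDB("admin");
    printjson(admin.runCommand({ moveChunk: "test.foo", find: { _id: 5 }, to: "localhost:30000" }));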
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: {} }, fields: {} } and CInfo { v_ns: "config.chunks", filter: {} }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: 2, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initializing over 2 shards required by [test.foo @ 2|1||4fd977518b3e1019098ebc10]
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 2 current: 4 version: 2|0||4fd977518b3e1019098ebc10 manager: 0x90067f8
m30999| Thu Jun 14 01:32:04 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977518b3e1019098ebc10'), serverID: ObjectId('4fd977518b3e1019098ebc0e'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ffe050
m30999| Thu Jun 14 01:32:04 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:32:04 [conn] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 2 current: 4 version: 2|0||4fd977518b3e1019098ebc10 manager: 0x90067f8
m30999| Thu Jun 14 01:32:04 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977518b3e1019098ebc10'), serverID: ObjectId('4fd977518b3e1019098ebc0e'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ffe050
m30999| Thu Jun 14 01:32:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] needed to set remote version on connection to value compatible with [test.foo @ 2|1||4fd977518b3e1019098ebc10]
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.foo @ 2|1||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 2 current: 4 version: 2|1||4fd977518b3e1019098ebc10 manager: 0x90067f8
m30999| Thu Jun 14 01:32:04 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977518b3e1019098ebc10'), serverID: ObjectId('4fd977518b3e1019098ebc0e'), shard: "shard0001", shardHost: "localhost:30001" } 0x8fff880
m30999| Thu Jun 14 01:32:04 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977518b3e1019098ebc10'), ok: 1.0 }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] needed to set remote version on connection to value compatible with [test.foo @ 2|1||4fd977518b3e1019098ebc10]
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 2|1||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.foo @ 2|1||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 2|1||4fd977518b3e1019098ebc10", cursor: { _id: 5.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 2|1||4fd977518b3e1019098ebc10", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:32:04 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 2|1||4fd977518b3e1019098ebc10", cursor: { _id: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30000| Thu Jun 14 01:32:04 [conn7] no current chunk manager found for this shard, will initialize
{
	"sharded" : 1,
	"shardedEver" : 4,
	"refs" : 0,
	"totalOpen" : 1,
	"ok" : 1
}
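This small document reports mongos-side cursor statistics: one sharded cursor still open, four ever created, none currently pinned by references. Assuming it was printed from the router's cursorInfo command (the log does not show which shell call produced it), a sketch would be:

    // assumption: read cluster cursor stats from the mongos admin database
    printjson(db.getSiblingDB("admin").runCommand({ cursorInfo: 1 }));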
m30001| Thu Jun 14 01:32:04 [cleanupOldData] moveChunk deleted: 5
m30999| Thu Jun 14 01:32:11 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:32:11 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:32:11 [initandlisten] connection accepted from 127.0.0.1:51211 #11 (11 connections now open)
m30999| Thu Jun 14 01:32:11 [Balancer] connected connection!
m30999| Thu Jun 14 01:32:11 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:32:11 [Balancer] skipping balancing round because balancing is disabled
m30999| Thu Jun 14 01:32:22 [cursorTimeout] killing old cursor 2497300875679810976 idle for: 10280ms
m30999| Thu Jun 14 01:32:31 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:32:31 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339651921:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:32:32 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:32:32 [conn3] end connection 127.0.0.1:51196 (10 connections now open)
m30000| Thu Jun 14 01:32:32 [conn4] end connection 127.0.0.1:51199 (10 connections now open)
m30000| Thu Jun 14 01:32:32 [conn6] end connection 127.0.0.1:51201 (8 connections now open)
m30000| Thu Jun 14 01:32:32 [conn7] end connection 127.0.0.1:51204 (8 connections now open)
m30001| Thu Jun 14 01:32:32 [conn3] end connection 127.0.0.1:42612 (4 connections now open)
m30001| Thu Jun 14 01:32:32 [conn4] end connection 127.0.0.1:42613 (3 connections now open)
m30000| Thu Jun 14 01:32:32 [conn11] end connection 127.0.0.1:51211 (6 connections now open)
Thu Jun 14 01:32:33 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:32:33 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:32:33 [interruptThread] now exiting
m30000| Thu Jun 14 01:32:33 dbexit:
m30000| Thu Jun 14 01:32:33 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:32:33 [interruptThread] closing listening socket: 16
m30000| Thu Jun 14 01:32:33 [interruptThread] closing listening socket: 17
m30000| Thu Jun 14 01:32:33 [interruptThread] closing listening socket: 18
m30000| Thu Jun 14 01:32:33 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:32:33 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:32:33 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:32:33 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:32:33 [conn5] end connection 127.0.0.1:42616 (2 connections now open)
m30000| Thu Jun 14 01:32:33 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:32:33 [conn10] end connection 127.0.0.1:51210 (5 connections now open)
m30000| Thu Jun 14 01:32:33 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:32:33 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:32:33 dbexit: really exiting now
Thu Jun 14 01:32:34 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:32:34 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:32:34 [interruptThread] now exiting
m30001| Thu Jun 14 01:32:34 dbexit:
m30001| Thu Jun 14 01:32:34 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:32:34 [interruptThread] closing listening socket: 19
m30001| Thu Jun 14 01:32:34 [interruptThread] closing listening socket: 20
m30001| Thu Jun 14 01:32:34 [interruptThread] closing listening socket: 21
m30001| Thu Jun 14 01:32:34 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:32:34 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:32:34 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:32:34 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:32:34 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:32:34 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:32:34 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:32:34 dbexit: really exiting now
Thu Jun 14 01:32:35 shell: stopped mongo program on port 30001
*** ShardingTest sharding_cursor1 completed successfully in 35.36 seconds ***
35410.478830ms
Thu Jun 14 01:32:35 [initandlisten] connection accepted from 127.0.0.1:54715 #21 (8 connections now open)
*******************************************
Test : diffservers1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/diffservers1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/diffservers1.js";TestData.testFile = "diffservers1.js";TestData.testName = "diffservers1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:32:35 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/diffservers10'
Thu Jun 14 01:32:35 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/diffservers10
m30000| Thu Jun 14 01:32:35
m30000| Thu Jun 14 01:32:35 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:32:35
m30000| Thu Jun 14 01:32:35 [initandlisten] MongoDB starting : pid=24097 port=30000 dbpath=/data/db/diffservers10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:32:35 [initandlisten]
m30000| Thu Jun 14 01:32:35 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:32:35 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:32:35 [initandlisten]
m30000| Thu Jun 14 01:32:35 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:32:35 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:32:35 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:32:35 [initandlisten]
m30000| Thu Jun 14 01:32:35 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:32:35 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:32:35 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:32:35 [initandlisten] options: { dbpath: "/data/db/diffservers10", port: 30000 }
m30000| Thu Jun 14 01:32:35 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:32:35 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/diffservers11'
m30000| Thu Jun 14 01:32:35 [initandlisten] connection accepted from 127.0.0.1:51214 #1 (1 connection now open)
Thu Jun 14 01:32:35 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/diffservers11
m30001| Thu Jun 14 01:32:35
m30001| Thu Jun 14 01:32:35 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:32:35
m30001| Thu Jun 14 01:32:35 [initandlisten] MongoDB starting : pid=24110 port=30001 dbpath=/data/db/diffservers11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:32:35 [initandlisten]
m30001| Thu Jun 14 01:32:35 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:32:35 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:32:35 [initandlisten]
m30001| Thu Jun 14 01:32:35 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:32:35 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:32:35 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:32:35 [initandlisten]
m30001| Thu Jun 14 01:32:35 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:32:35 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:32:35 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:32:35 [initandlisten] options: { dbpath: "/data/db/diffservers11", port: 30001 }
m30001| Thu Jun 14 01:32:35 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:32:35 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
ShardingTest diffservers1 :
{
	"config" : "localhost:30000",
	"shards" : [
		connection to localhost:30000,
		connection to localhost:30001
	]
}
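The block above is the shell's summary of the cluster ShardingTest has just assembled for diffservers1: a single config server on localhost:30000 that also serves as shard0000, a second shard on 30001, and a mongos about to start on 30999. A sketch of the legacy positional constructor that builds this kind of two-shard cluster (the exact arguments used by the test are not visible in the log):

    // legacy form: (test name, number of shards)
    var s = new ShardingTest("diffservers1", 2);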
Thu Jun 14 01:32:35 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30001| Thu Jun 14 01:32:35 [initandlisten] connection accepted from 127.0.0.1:42623 #1 (1 connection now open)
m30000| Thu Jun 14 01:32:35 [initandlisten] connection accepted from 127.0.0.1:51217 #2 (2 connections now open)
m30000| Thu Jun 14 01:32:35 [FileAllocator] allocating new datafile /data/db/diffservers10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:32:35 [FileAllocator] creating directory /data/db/diffservers10/_tmp
m30999| Thu Jun 14 01:32:35 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:32:35 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24124 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:32:35 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:32:35 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:32:35 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:32:35 [initandlisten] connection accepted from 127.0.0.1:51219 #3 (3 connections now open)
m30000| Thu Jun 14 01:32:36 [FileAllocator] done allocating datafile /data/db/diffservers10/config.ns, size: 16MB, took 0.269 secs
m30000| Thu Jun 14 01:32:36 [FileAllocator] allocating new datafile /data/db/diffservers10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:32:36 [FileAllocator] done allocating datafile /data/db/diffservers10/config.0, size: 16MB, took 0.306 secs
m30000| Thu Jun 14 01:32:36 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn2] insert config.settings keyUpdates:0 locks(micros) w:586812 586ms
m30000| Thu Jun 14 01:32:36 [FileAllocator] allocating new datafile /data/db/diffservers10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:32:36 [initandlisten] connection accepted from 127.0.0.1:51222 #4 (4 connections now open)
m30000| Thu Jun 14 01:32:36 [initandlisten] connection accepted from 127.0.0.1:51223 #5 (5 connections now open)
m30000| Thu Jun 14 01:32:36 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn4] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn4] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:32:36 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [initandlisten] connection accepted from 127.0.0.1:51224 #6 (6 connections now open)
m30000| Thu Jun 14 01:32:36 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:32:36 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:32:36 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:32:36 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:32:36 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:32:36 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:32:36
m30999| Thu Jun 14 01:32:36 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:32:36 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn4] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn4] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:32:36 [conn4] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:32:36 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651956:1804289383' acquired, ts : 4fd97774b8a688297c7b9636
m30999| Thu Jun 14 01:32:36 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651956:1804289383' unlocked.
m30999| Thu Jun 14 01:32:36 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651956:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:32:36 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:36 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:32:36 [mongosMain] connection accepted from 127.0.0.1:43204 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:32:36 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:32:36 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:32:36 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:32:36 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:32:36 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:32:36 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30000| Thu Jun 14 01:32:36 [initandlisten] connection accepted from 127.0.0.1:51227 #7 (7 connections now open)
m30999| Thu Jun 14 01:32:36 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97774b8a688297c7b9635
m30999| Thu Jun 14 01:32:36 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97774b8a688297c7b9635
m30999| Thu Jun 14 01:32:36 [conn] couldn't find database [test1] in config db
m30001| Thu Jun 14 01:32:36 [initandlisten] connection accepted from 127.0.0.1:42633 #2 (2 connections now open)
m30001| Thu Jun 14 01:32:36 [initandlisten] connection accepted from 127.0.0.1:42635 #3 (3 connections now open)
m30001| Thu Jun 14 01:32:36 [initandlisten] connection accepted from 127.0.0.1:42636 #4 (4 connections now open)
m30999| Thu Jun 14 01:32:36 [conn] put [test1] on: shard0001:localhost:30001
m30001| Thu Jun 14 01:32:36 [FileAllocator] allocating new datafile /data/db/diffservers11/test1.ns, filling with zeroes...
m30001| Thu Jun 14 01:32:36 [FileAllocator] creating directory /data/db/diffservers11/_tmp
m30000| Thu Jun 14 01:32:37 [FileAllocator] done allocating datafile /data/db/diffservers10/config.1, size: 32MB, took 0.596 secs
m30001| Thu Jun 14 01:32:37 [FileAllocator] done allocating datafile /data/db/diffservers11/test1.ns, size: 16MB, took 0.415 secs
m30001| Thu Jun 14 01:32:37 [FileAllocator] allocating new datafile /data/db/diffservers11/test1.0, filling with zeroes...
m30001| Thu Jun 14 01:32:37 [FileAllocator] done allocating datafile /data/db/diffservers11/test1.0, size: 16MB, took 0.255 secs
m30001| Thu Jun 14 01:32:37 [conn3] build index test1.foo { _id: 1 }
m30001| Thu Jun 14 01:32:37 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:37 [conn3] insert test1.foo keyUpdates:0 locks(micros) W:72 w:1233383 1233ms
m30001| Thu Jun 14 01:32:37 [FileAllocator] allocating new datafile /data/db/diffservers11/test1.1, filling with zeroes...
m30000| Thu Jun 14 01:32:37 [conn4] end connection 127.0.0.1:51222 (6 connections now open)
m30000| Thu Jun 14 01:32:37 [conn3] end connection 127.0.0.1:51219 (6 connections now open)
m30000| Thu Jun 14 01:32:37 [conn6] end connection 127.0.0.1:51224 (4 connections now open)
m30000| Thu Jun 14 01:32:37 [conn7] end connection 127.0.0.1:51227 (4 connections now open)
m30001| Thu Jun 14 01:32:37 [conn3] end connection 127.0.0.1:42635 (3 connections now open)
m30001| Thu Jun 14 01:32:37 [conn4] end connection 127.0.0.1:42636 (2 connections now open)
m30999| Thu Jun 14 01:32:37 [conn] addshard request { addshard: "sdd$%" } failed: attempt to mix localhosts and IPs
m30999| Thu Jun 14 01:32:37 [conn] addshard request { addshard: "127.0.0.1:43415" } failed: couldn't connect to new shard socket exception
m30999| Thu Jun 14 01:32:37 [conn] addshard request { addshard: "10.0.0.1:43415" } failed: attempt to mix localhosts and IPs
m30999| Thu Jun 14 01:32:37 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Thu Jun 14 01:32:38 [FileAllocator] done allocating datafile /data/db/diffservers11/test1.1, size: 32MB, took 0.589 secs
Thu Jun 14 01:32:38 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:32:38 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:32:38 [interruptThread] now exiting
m30000| Thu Jun 14 01:32:38 dbexit:
m30000| Thu Jun 14 01:32:38 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:32:38 [interruptThread] closing listening socket: 17
m30000| Thu Jun 14 01:32:38 [interruptThread] closing listening socket: 18
m30000| Thu Jun 14 01:32:38 [interruptThread] closing listening socket: 19
m30000| Thu Jun 14 01:32:38 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:32:38 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:32:38 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:32:38 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:32:38 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:32:38 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:32:38 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:32:38 dbexit: really exiting now
Thu Jun 14 01:32:39 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:32:39 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:32:39 [interruptThread] now exiting
m30001| Thu Jun 14 01:32:39 dbexit:
m30001| Thu Jun 14 01:32:39 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:32:39 [interruptThread] closing listening socket: 20
m30001| Thu Jun 14 01:32:39 [interruptThread] closing listening socket: 21
m30001| Thu Jun 14 01:32:39 [interruptThread] closing listening socket: 22
m30001| Thu Jun 14 01:32:39 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:32:39 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:32:39 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:32:39 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:32:39 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:32:39 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:32:39 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:32:39 dbexit: really exiting now
Thu Jun 14 01:32:40 shell: stopped mongo program on port 30001
*** ShardingTest diffservers1 completed successfully in 5.36 seconds ***
5405.010939ms
Thu Jun 14 01:32:40 [initandlisten] connection accepted from 127.0.0.1:54734 #22 (9 connections now open)
*******************************************
Test : drop_configdb.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/drop_configdb.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/drop_configdb.js";TestData.testFile = "drop_configdb.js";TestData.testName = "drop_configdb";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:32:40 2012
MongoDB shell version: 2.1.2-pre-
null
Thu Jun 14 01:32:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --shardsvr --port 30001 --dbpath /data/db/drop_config_shardA --nopreallocj
m30001| Thu Jun 14 01:32:40
m30001| Thu Jun 14 01:32:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:32:40
m30001| Thu Jun 14 01:32:40 [initandlisten] MongoDB starting : pid=24161 port=30001 dbpath=/data/db/drop_config_shardA 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:32:40 [initandlisten]
m30001| Thu Jun 14 01:32:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:32:40 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:32:40 [initandlisten]
m30001| Thu Jun 14 01:32:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:32:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:32:40 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:32:40 [initandlisten]
m30001| Thu Jun 14 01:32:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:32:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:32:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:32:40 [initandlisten] options: { dbpath: "/data/db/drop_config_shardA", nopreallocj: true, port: 30001, shardsvr: true }
m30001| Thu Jun 14 01:32:40 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:32:40 [websvr] admin web console waiting for connections on port 31001
m30001| Thu Jun 14 01:32:41 [initandlisten] connection accepted from 127.0.0.1:42640 #1 (1 connection now open)
Thu Jun 14 01:32:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --shardsvr --port 30002 --dbpath /data/db/drop_config_shardB --nopreallocj
m30002| Thu Jun 14 01:32:41
m30002| Thu Jun 14 01:32:41 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:32:41
m30002| Thu Jun 14 01:32:41 [initandlisten] MongoDB starting : pid=24174 port=30002 dbpath=/data/db/drop_config_shardB 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:32:41 [initandlisten]
m30002| Thu Jun 14 01:32:41 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:32:41 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:32:41 [initandlisten]
m30002| Thu Jun 14 01:32:41 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:32:41 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:32:41 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:32:41 [initandlisten]
m30002| Thu Jun 14 01:32:41 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:32:41 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:32:41 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:32:41 [initandlisten] options: { dbpath: "/data/db/drop_config_shardB", nopreallocj: true, port: 30002, shardsvr: true }
m30002| Thu Jun 14 01:32:41 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:32:41 [websvr] admin web console waiting for connections on port 31002
m30002| Thu Jun 14 01:32:41 [initandlisten] connection accepted from 127.0.0.1:45377 #1 (1 connection now open)
Thu Jun 14 01:32:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --configsvr --port 29999 --dbpath /data/db/drop_config_configC --nopreallocj
m29999| Thu Jun 14 01:32:41
m29999| Thu Jun 14 01:32:41 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29999| Thu Jun 14 01:32:41
m29999| Thu Jun 14 01:32:41 [initandlisten] MongoDB starting : pid=24187 port=29999 dbpath=/data/db/drop_config_configC 32-bit host=domU-12-31-39-01-70-B4
m29999| Thu Jun 14 01:32:41 [initandlisten]
m29999| Thu Jun 14 01:32:41 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29999| Thu Jun 14 01:32:41 [initandlisten] ** Not recommended for production.
m29999| Thu Jun 14 01:32:41 [initandlisten]
m29999| Thu Jun 14 01:32:41 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29999| Thu Jun 14 01:32:41 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29999| Thu Jun 14 01:32:41 [initandlisten] ** with --journal, the limit is lower
m29999| Thu Jun 14 01:32:41 [initandlisten]
m29999| Thu Jun 14 01:32:41 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29999| Thu Jun 14 01:32:41 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29999| Thu Jun 14 01:32:41 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29999| Thu Jun 14 01:32:41 [initandlisten] options: { configsvr: true, dbpath: "/data/db/drop_config_configC", nopreallocj: true, port: 29999 }
m29999| Thu Jun 14 01:32:41 [initandlisten] journal dir=/data/db/drop_config_configC/journal
m29999| Thu Jun 14 01:32:41 [initandlisten] recover : no journal files present, no recovery needed
m29999| Thu Jun 14 01:32:41 [initandlisten] waiting for connections on port 29999
m29999| Thu Jun 14 01:32:41 [websvr] admin web console waiting for connections on port 30999
Thu Jun 14 01:32:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30000 --configdb localhost:29999
m29999| Thu Jun 14 01:32:41 [initandlisten] connection accepted from 127.0.0.1:51597 #1 (1 connection now open)
m30000| Thu Jun 14 01:32:41 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30000| Thu Jun 14 01:32:41 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24200 port=30000 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30000| Thu Jun 14 01:32:41 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:32:41 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:32:41 [mongosMain] options: { configdb: "localhost:29999", port: 30000 }
m29999| Thu Jun 14 01:32:41 [initandlisten] connection accepted from 127.0.0.1:51599 #2 (2 connections now open)
m29999| Thu Jun 14 01:32:41 [initandlisten] connection accepted from 127.0.0.1:51600 #3 (3 connections now open)
m29999| Thu Jun 14 01:32:41 [initandlisten] connection accepted from 127.0.0.1:51601 #4 (4 connections now open)
m29999| Thu Jun 14 01:32:41 [FileAllocator] allocating new datafile /data/db/drop_config_configC/config.ns, filling with zeroes...
m29999| Thu Jun 14 01:32:41 [FileAllocator] creating directory /data/db/drop_config_configC/_tmp
m29999| Thu Jun 14 01:32:41 [FileAllocator] done allocating datafile /data/db/drop_config_configC/config.ns, size: 16MB, took 0.243 secs
m29999| Thu Jun 14 01:32:41 [FileAllocator] allocating new datafile /data/db/drop_config_configC/config.0, filling with zeroes...
m29999| Thu Jun 14 01:32:42 [FileAllocator] done allocating datafile /data/db/drop_config_configC/config.0, size: 16MB, took 0.283 secs
m29999| Thu Jun 14 01:32:42 [FileAllocator] allocating new datafile /data/db/drop_config_configC/config.1, filling with zeroes...
m29999| Thu Jun 14 01:32:42 [conn4] build index config.version { _id: 1 }
m29999| Thu Jun 14 01:32:42 [conn4] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [conn4] insert config.version keyUpdates:0 locks(micros) w:583839 583ms
m29999| Thu Jun 14 01:32:42 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:32:42 [mongosMain] waiting for connections on port 30000
m30000| Thu Jun 14 01:32:42 [websvr] admin web console waiting for connections on port 31000
m30000| Thu Jun 14 01:32:42 [Balancer] about to contact config servers and shards
m30000| Thu Jun 14 01:32:42 [mongosMain] connection accepted from 127.0.0.1:51244 #1 (1 connection now open)
m29999| Thu Jun 14 01:32:42 [FileAllocator] done allocating datafile /data/db/drop_config_configC/config.1, size: 32MB, took 0.613 secs
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0.612 secs
m29999| Thu Jun 14 01:32:42 [conn2] insert config.settings keyUpdates:0 locks(micros) r:226 w:612960 612ms
m29999| Thu Jun 14 01:32:42 [conn2] build index config.chunks { _id: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [conn2] info: creating collection config.chunks on add index
m29999| Thu Jun 14 01:32:42 [conn2] build index config.chunks { ns: 1, min: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [conn2] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [conn2] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:32:42 [Balancer] config servers and shards contacted successfully
m30000| Thu Jun 14 01:32:42 [Balancer] balancer id: domU-12-31-39-01-70-B4:30000 started at Jun 14 01:32:42
m30000| Thu Jun 14 01:32:42 [Balancer] created new distributed lock for balancer on localhost:29999 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [initandlisten] connection accepted from 127.0.0.1:51605 #5 (5 connections now open)
m29999| Thu Jun 14 01:32:42 [conn4] build index config.mongos { _id: 1 }
m29999| Thu Jun 14 01:32:42 [conn4] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [conn2] build index config.shards { _id: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [conn2] info: creating collection config.shards on add index
m29999| Thu Jun 14 01:32:42 [conn2] build index config.shards { host: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:42 [conn] couldn't find database [admin] in config db
m29999| Thu Jun 14 01:32:42 [initandlisten] connection accepted from 127.0.0.1:51606 #6 (6 connections now open)
m29999| Thu Jun 14 01:32:42 [conn2] build index config.databases { _id: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:42 [conn] put [admin] on: config:localhost:29999
m30001| Thu Jun 14 01:32:42 [initandlisten] connection accepted from 127.0.0.1:42654 #2 (2 connections now open)
m30000| Thu Jun 14 01:32:42 [conn] going to add shard: { _id: "shard0000", host: "localhost:30001" }
m30002| Thu Jun 14 01:32:42 [initandlisten] connection accepted from 127.0.0.1:45390 #2 (2 connections now open)
m30000| Thu Jun 14 01:32:42 [LockPinger] creating distributed lock ping thread for localhost:29999 and process domU-12-31-39-01-70-B4:30000:1339651962:1804289383 (sleeping for 30000ms)
m29999| Thu Jun 14 01:32:42 [conn5] build index config.locks { _id: 1 }
m29999| Thu Jun 14 01:32:42 [conn5] build index done. scanned 0 total records. 0 secs
m29999| Thu Jun 14 01:32:42 [conn2] build index config.lockpings { _id: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:42 [conn] going to add shard: { _id: "shard0001", host: "localhost:30002" }
1: Try to drop config database via configsvr
m30000| Thu Jun 14 01:32:42 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30000:1339651962:1804289383' acquired, ts : 4fd9777a60620999f8684dcf
2: Ensure it wasn't dropped
m30000| Thu Jun 14 01:32:42 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30000:1339651962:1804289383' unlocked.
m29999| Thu Jun 14 01:32:42 [conn2] build index config.lockpings { ping: 1 }
m29999| Thu Jun 14 01:32:42 [conn2] build index done. scanned 1 total records. 0 secs
1: Try to drop config database via mongos
2: Ensure it wasn't dropped
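[editor's note] The numbered step markers above are printed by the drop_configdb test interleaved with the server logs: the shell first asks the config server directly, then the mongos, to drop the config database, and then verifies the sharding metadata is still there. A minimal mongo-shell sketch of the mongos-side check follows; it is an assumption for illustration, not the jstest's actual source, and the variable names and assertions are invented here.
    // Hedged sketch, not the actual drop_configdb.js code.
    var mongos   = new Mongo("localhost:30000");      // the mongos started above
    var configDB = mongos.getDB("config");
    printjson(configDB.dropDatabase());               // try to drop config via mongos
    // Ensure it wasn't dropped: the metadata collections must still be present.
    assert(configDB.shards.count() > 0, "config.shards should still list the shards");
    assert(configDB.version.count() > 0, "config.version should still exist");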
m30001| Thu Jun 14 01:32:42 [initandlisten] connection accepted from 127.0.0.1:42656 #3 (3 connections now open)
m30000| Thu Jun 14 01:32:42 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd9777a60620999f8684dce
m30000| Thu Jun 14 01:32:42 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd9777a60620999f8684dce
m30000| Thu Jun 14 01:32:42 [conn] creating WriteBackListener for: localhost:29999 serverID: 4fd9777a60620999f8684dce
m30002| Thu Jun 14 01:32:42 [initandlisten] connection accepted from 127.0.0.1:45392 #3 (3 connections now open)
m29999| Thu Jun 14 01:32:42 [initandlisten] connection accepted from 127.0.0.1:51611 #7 (7 connections now open)
m30000| Thu Jun 14 01:32:42 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29999| Thu Jun 14 01:32:42 [conn2] end connection 127.0.0.1:51599 (6 connections now open)
m29999| Thu Jun 14 01:32:42 [conn4] end connection 127.0.0.1:51601 (5 connections now open)
m29999| Thu Jun 14 01:32:42 [conn5] end connection 127.0.0.1:51605 (4 connections now open)
m29999| Thu Jun 14 01:32:42 [conn6] end connection 127.0.0.1:51606 (3 connections now open)
m29999| Thu Jun 14 01:32:42 [conn7] end connection 127.0.0.1:51611 (2 connections now open)
m29999| Thu Jun 14 01:32:42 [conn3] end connection 127.0.0.1:51600 (1 connection now open)
m30001| Thu Jun 14 01:32:42 [conn3] end connection 127.0.0.1:42656 (2 connections now open)
m30002| Thu Jun 14 01:32:42 [conn3] end connection 127.0.0.1:45392 (2 connections now open)
Thu Jun 14 01:32:43 shell: stopped mongo program on port 30000
m29999| Thu Jun 14 01:32:43 got signal 15 (Terminated), will terminate after current cmd ends
m29999| Thu Jun 14 01:32:43 [interruptThread] now exiting
m29999| Thu Jun 14 01:32:43 dbexit:
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: going to close listening sockets...
m29999| Thu Jun 14 01:32:43 [interruptThread] closing listening socket: 25
m29999| Thu Jun 14 01:32:43 [interruptThread] closing listening socket: 26
m29999| Thu Jun 14 01:32:43 [interruptThread] closing listening socket: 27
m29999| Thu Jun 14 01:32:43 [interruptThread] removing socket file: /tmp/mongodb-29999.sock
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: going to flush diaglog...
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: going to close sockets...
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: waiting for fs preallocator...
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: lock for final commit...
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: final commit...
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: closing all files...
m29999| Thu Jun 14 01:32:43 [interruptThread] closeAllFiles() finished
m29999| Thu Jun 14 01:32:43 [interruptThread] journalCleanup...
m29999| Thu Jun 14 01:32:43 [interruptThread] removeJournalFiles
m29999| Thu Jun 14 01:32:43 [interruptThread] shutdown: removing fs lock...
m29999| Thu Jun 14 01:32:43 dbexit: really exiting now
Thu Jun 14 01:32:44 shell: stopped mongo program on port 29999
m30001| Thu Jun 14 01:32:44 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:32:44 [interruptThread] now exiting
m30001| Thu Jun 14 01:32:44 dbexit:
m30001| Thu Jun 14 01:32:44 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:32:44 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:32:44 [interruptThread] closing listening socket: 19
m30001| Thu Jun 14 01:32:44 [interruptThread] closing listening socket: 20
m30001| Thu Jun 14 01:32:44 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:32:44 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:32:44 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:32:44 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:32:44 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:32:44 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:32:44 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:32:44 dbexit: really exiting now
Thu Jun 14 01:32:45 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:32:45 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:32:45 [interruptThread] now exiting
m30002| Thu Jun 14 01:32:45 dbexit:
m30002| Thu Jun 14 01:32:45 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:32:45 [interruptThread] closing listening socket: 21
m30002| Thu Jun 14 01:32:45 [interruptThread] closing listening socket: 22
m30002| Thu Jun 14 01:32:45 [interruptThread] closing listening socket: 23
m30002| Thu Jun 14 01:32:45 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:32:45 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:32:45 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:32:45 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:32:45 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:32:45 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:32:45 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:32:45 dbexit: really exiting now
Thu Jun 14 01:32:46 shell: stopped mongo program on port 30002
5948.890209ms
Thu Jun 14 01:32:46 [initandlisten] connection accepted from 127.0.0.1:54755 #23 (10 connections now open)
*******************************************
Test : drop_sharded_db.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/drop_sharded_db.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/drop_sharded_db.js";TestData.testFile = "drop_sharded_db.js";TestData.testName = "drop_sharded_db";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:32:46 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/drop_sharded_db0'
Thu Jun 14 01:32:46 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/drop_sharded_db0
m30000| Thu Jun 14 01:32:46
m30000| Thu Jun 14 01:32:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:32:46
m30000| Thu Jun 14 01:32:46 [initandlisten] MongoDB starting : pid=24242 port=30000 dbpath=/data/db/drop_sharded_db0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:32:46 [initandlisten]
m30000| Thu Jun 14 01:32:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:32:46 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:32:46 [initandlisten]
m30000| Thu Jun 14 01:32:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:32:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:32:46 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:32:46 [initandlisten]
m30000| Thu Jun 14 01:32:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:32:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:32:46 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:32:46 [initandlisten] options: { dbpath: "/data/db/drop_sharded_db0", port: 30000 }
m30000| Thu Jun 14 01:32:46 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:32:46 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/drop_sharded_db1'
m30000| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:51254 #1 (1 connection now open)
Thu Jun 14 01:32:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/drop_sharded_db1
m30001| Thu Jun 14 01:32:47
m30001| Thu Jun 14 01:32:47 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:32:47
m30001| Thu Jun 14 01:32:47 [initandlisten] MongoDB starting : pid=24255 port=30001 dbpath=/data/db/drop_sharded_db1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:32:47 [initandlisten]
m30001| Thu Jun 14 01:32:47 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:32:47 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:32:47 [initandlisten]
m30001| Thu Jun 14 01:32:47 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:32:47 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:32:47 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:32:47 [initandlisten]
m30001| Thu Jun 14 01:32:47 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:32:47 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:32:47 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:32:47 [initandlisten] options: { dbpath: "/data/db/drop_sharded_db1", port: 30001 }
m30001| Thu Jun 14 01:32:47 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:32:47 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
ShardingTest drop_sharded_db :
{
	"config" : "localhost:30000",
	"shards" : [
		connection to localhost:30000,
		connection to localhost:30001
	]
}
Thu Jun 14 01:32:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:51257 #2 (2 connections now open)
m30000| Thu Jun 14 01:32:47 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:32:47 [FileAllocator] creating directory /data/db/drop_sharded_db0/_tmp
m30999| Thu Jun 14 01:32:47 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:32:47 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24269 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:32:47 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:32:47 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:32:47 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30001| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:42663 #1 (1 connection now open)
m30000| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:51258 #3 (3 connections now open)
m30000| Thu Jun 14 01:32:47 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/config.ns, size: 16MB, took 0.23 secs
m30000| Thu Jun 14 01:32:47 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:32:47 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/config.0, size: 16MB, took 0.273 secs
m30999| Thu Jun 14 01:32:47 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:32:47 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:32:47 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:32:47 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:32:47 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:32:47
m30999| Thu Jun 14 01:32:47 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:32:47 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339651967:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:32:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd9777f2cb587101f2e7a24
m30999| Thu Jun 14 01:32:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30000| Thu Jun 14 01:32:47 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:32:47 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn2] insert config.settings keyUpdates:0 locks(micros) w:520732 520ms
m30000| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:51262 #4 (4 connections now open)
m30000| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:51263 #5 (5 connections now open)
m30000| Thu Jun 14 01:32:47 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:32:47 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:32:47 [conn4] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:51264 #6 (6 connections now open)
m30000| Thu Jun 14 01:32:47 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:47 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:32:47 [mongosMain] connection accepted from 127.0.0.1:43244 #1 (1 connection now open)
m30999| Thu Jun 14 01:32:47 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:32:47 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:32:47 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:32:47 [conn] put [admin] on: config:localhost:30000
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:32:47 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30999| Thu Jun 14 01:32:47 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30999| Thu Jun 14 01:32:47 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd9777f2cb587101f2e7a23
m30001| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:42673 #2 (2 connections now open)
m30000| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:51267 #7 (7 connections now open)
Waiting for active hosts...
Waiting for the balancer lock...
m30001| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:42675 #3 (3 connections now open)
m30999| Thu Jun 14 01:32:47 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd9777f2cb587101f2e7a23
Waiting again for active hosts after balancer is off...
1: insert some data and colls into all dbs
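[editor's note] Step 1 of drop_sharded_db.js populates several databases through the mongos; the sketch below is a hedged reconstruction from the log lines that follow (databases buy, buy_201107, buy_201108 and collections data0..data9 appear below; the loop bounds and document shape are assumptions, not the test's source).
    // Hedged sketch of "insert some data and colls into all dbs".
    var mongos = new Mongo("localhost:30999");        // the mongos started above
    ["buy", "buy_201107", "buy_201108"].forEach(function(name) {
        var d = mongos.getDB(name);
        for (var c = 0; c < 10; c++) {                // data0 .. data9, as seen in the log
            for (var i = 0; i < 300; i++) {
                d.getCollection("data" + c).insert({ _id: i });
            }
        }
    });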
m30999| Thu Jun 14 01:32:47 [conn] couldn't find database [buy] in config db
m30001| Thu Jun 14 01:32:47 [initandlisten] connection accepted from 127.0.0.1:42676 #4 (4 connections now open)
m30999| Thu Jun 14 01:32:47 [conn] put [buy] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:32:47 [conn] couldn't find database [buy_201107] in config db
m30001| Thu Jun 14 01:32:47 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy.ns, filling with zeroes...
m30001| Thu Jun 14 01:32:47 [FileAllocator] creating directory /data/db/drop_sharded_db1/_tmp
2: shard the colls
m30000| Thu Jun 14 01:32:48 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/config.1, size: 32MB, took 0.667 secs
m30001| Thu Jun 14 01:32:48 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy.ns, size: 16MB, took 0.441 secs
m30001| Thu Jun 14 01:32:48 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy.0, filling with zeroes...
m30001| Thu Jun 14 01:32:49 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy.0, size: 16MB, took 0.257 secs
m30001| Thu Jun 14 01:32:49 [conn3] build index buy.data0 { _id: 1 }
m30001| Thu Jun 14 01:32:49 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:49 [conn3] insert buy.data0 keyUpdates:0 locks(micros) W:53 w:1206423 1206ms
m30001| Thu Jun 14 01:32:49 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:32 reslen:1685 1205ms
m30001| Thu Jun 14 01:32:49 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy.1, filling with zeroes...
m30999| Thu Jun 14 01:32:49 [conn] put [buy_201107] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:32:49 [conn] couldn't find database [buy_201108] in config db
m30001| Thu Jun 14 01:32:49 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy.1, size: 32MB, took 0.565 secs
m30001| Thu Jun 14 01:32:49 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy_201107.ns, filling with zeroes...
m30001| Thu Jun 14 01:32:49 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy_201107.ns, size: 16MB, took 0.255 secs
m30001| Thu Jun 14 01:32:49 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy_201107.0, filling with zeroes...
m30001| Thu Jun 14 01:32:50 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy_201107.0, size: 16MB, took 0.3 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data0 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] insert buy_201107.data0 keyUpdates:0 locks(micros) W:53 w:2338354 1131ms
m30001| Thu Jun 14 01:32:50 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:66 reslen:1856 1130ms
m30999| Thu Jun 14 01:32:50 [conn] put [buy_201108] on: shard0000:localhost:30000
m30000| Thu Jun 14 01:32:50 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy_201108.ns, filling with zeroes...
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data1 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data1 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data2 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data2 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data3 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data3 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data4 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data4 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data5 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data5 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data6 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data6 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data7 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data7 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data8 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data8 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy.data9 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [conn3] build index buy_201107.data9 { _id: 1 }
m30001| Thu Jun 14 01:32:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:50 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy_201107.1, filling with zeroes...
m30000| Thu Jun 14 01:32:50 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy_201108.ns, size: 16MB, took 0.316 secs
m30000| Thu Jun 14 01:32:50 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy_201108.0, filling with zeroes...
m30001| Thu Jun 14 01:32:51 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy_201107.1, size: 32MB, took 0.843 secs
m30000| Thu Jun 14 01:32:51 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy_201108.0, size: 16MB, took 0.929 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data0 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] insert buy_201108.data0 keyUpdates:0 locks(micros) W:62 r:46 w:1256424 1256ms
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data1 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data2 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data3 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data4 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data5 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data6 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data7 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data8 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [conn7] build index buy_201108.data9 { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy_201108.1, filling with zeroes...
m30999| Thu Jun 14 01:32:51 [conn] enabling sharding on: buy
m30999| Thu Jun 14 01:32:51 [conn] CMD: shardcollection: { shardcollection: "buy.data0", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:51 [conn] enable sharding on: buy.data0 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:51 [conn] going to create 1 chunk(s) for: buy.data0 using new epoch 4fd977832cb587101f2e7a25
m30999| Thu Jun 14 01:32:51 [conn] ChunkManager: time to load chunks for buy.data0: 0ms sequenceNumber: 2 version: 1|0||4fd977832cb587101f2e7a25 based on: (empty)
m30000| Thu Jun 14 01:32:51 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:32:51 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:51 [initandlisten] connection accepted from 127.0.0.1:51270 #8 (8 connections now open)
m30001| Thu Jun 14 01:32:51 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:32:51 [conn] splitting: buy.data0 shard: ns:buy.data0 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:32:51 [initandlisten] connection accepted from 127.0.0.1:51271 #9 (9 connections now open)
m30001| Thu Jun 14 01:32:51 [conn4] received splitChunk request: { splitChunk: "buy.data0", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data0-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:51 [conn4] created new distributed lock for buy.data0 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:51 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339651971:1856192163 (sleeping for 30000ms)
m30001| Thu Jun 14 01:32:51 [conn4] distributed lock 'buy.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977835a33af45ea50d92a
m30000| Thu Jun 14 01:32:51 [initandlisten] connection accepted from 127.0.0.1:51272 #10 (10 connections now open)
m30001| Thu Jun 14 01:32:51 [conn4] splitChunk accepted at version 1|0||4fd977832cb587101f2e7a25
m30001| Thu Jun 14 01:32:51 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:51-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651971640), what: "split", ns: "buy.data0", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977832cb587101f2e7a25') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977832cb587101f2e7a25') } } }
m30001| Thu Jun 14 01:32:51 [conn4] distributed lock 'buy.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:32:51 [conn] ChunkManager: time to load chunks for buy.data0: 0ms sequenceNumber: 3 version: 1|2||4fd977832cb587101f2e7a25 based on: 1|0||4fd977832cb587101f2e7a25
m30999| Thu Jun 14 01:32:51 [conn] CMD: movechunk: { movechunk: "buy.data0", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:32:51 [conn] moving chunk ns: buy.data0 moving ( ns:buy.data0 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:32:51 [conn4] received moveChunk request: { moveChunk: "buy.data0", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data0-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:51 [conn4] created new distributed lock for buy.data0 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:51 [conn4] distributed lock 'buy.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977835a33af45ea50d92b
m30001| Thu Jun 14 01:32:51 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:51-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651971644), what: "moveChunk.start", ns: "buy.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:51 [conn4] moveChunk request accepted at version 1|2||4fd977832cb587101f2e7a25
m30001| Thu Jun 14 01:32:51 [conn4] moveChunk number of documents: 299
m30001| Thu Jun 14 01:32:51 [initandlisten] connection accepted from 127.0.0.1:42680 #5 (5 connections now open)
m30000| Thu Jun 14 01:32:52 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy_201108.1, size: 32MB, took 0.599 secs
m30000| Thu Jun 14 01:32:52 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy.ns, filling with zeroes...
m30000| Thu Jun 14 01:32:52 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy.ns, size: 16MB, took 0.298 secs
m30000| Thu Jun 14 01:32:52 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy.0, filling with zeroes...
m30001| Thu Jun 14 01:32:52 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data0", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:32:52 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy.0, size: 16MB, took 0.387 secs
m30000| Thu Jun 14 01:32:52 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy.1, filling with zeroes...
m30000| Thu Jun 14 01:32:52 [migrateThread] build index buy.data0 { _id: 1 }
m30000| Thu Jun 14 01:32:52 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:52 [migrateThread] info: creating collection buy.data0 on add index
m30000| Thu Jun 14 01:32:52 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data0' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:53 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy.1, size: 32MB, took 0.662 secs
m30001| Thu Jun 14 01:32:53 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data0", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 299, clonedBytes: 5382, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:32:53 [conn4] moveChunk setting version to: 2|0||4fd977832cb587101f2e7a25
m30000| Thu Jun 14 01:32:53 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data0' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:53 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:53-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651973659), what: "moveChunk.to", ns: "buy.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 1279, step2 of 5: 0, step3 of 5: 13, step4 of 5: 0, step5 of 5: 721 } }
m30000| Thu Jun 14 01:32:53 [initandlisten] connection accepted from 127.0.0.1:51274 #11 (11 connections now open)
m30001| Thu Jun 14 01:32:53 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data0", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 299, clonedBytes: 5382, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:32:53 [conn4] moveChunk updating self version to: 2|1||4fd977832cb587101f2e7a25 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data0'
m30001| Thu Jun 14 01:32:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:53-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651973664), what: "moveChunk.commit", ns: "buy.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:53 [conn4] doing delete inline
m30001| Thu Jun 14 01:32:53 [conn4] moveChunk deleted: 299
m30001| Thu Jun 14 01:32:53 [conn4] distributed lock 'buy.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:32:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:53-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651973673), what: "moveChunk.from", ns: "buy.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 12, step6 of 6: 8 } }
m30001| Thu Jun 14 01:32:53 [conn4] command admin.$cmd command: { moveChunk: "buy.data0", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data0-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:917 w:7249 reslen:37 2029ms
{ "millis" : 2030, "ok" : 1 }
m30001| Thu Jun 14 01:32:53 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:32:53 [conn] ChunkManager: time to load chunks for buy.data0: 0ms sequenceNumber: 4 version: 2|1||4fd977832cb587101f2e7a25 based on: 1|2||4fd977832cb587101f2e7a25
m30999| Thu Jun 14 01:32:53 [conn] enabling sharding on: buy_201107
m30999| Thu Jun 14 01:32:53 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data0", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:53 [conn] enable sharding on: buy_201107.data0 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:53 [conn] going to create 1 chunk(s) for: buy_201107.data0 using new epoch 4fd977852cb587101f2e7a26
m30999| Thu Jun 14 01:32:53 [conn] ChunkManager: time to load chunks for buy_201107.data0: 0ms sequenceNumber: 5 version: 1|0||4fd977852cb587101f2e7a26 based on: (empty)
m30001| Thu Jun 14 01:32:53 [conn4] received splitChunk request: { splitChunk: "buy_201107.data0", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data0-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:53 [conn4] created new distributed lock for buy_201107.data0 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:53 [conn4] distributed lock 'buy_201107.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977855a33af45ea50d92c
m30999| Thu Jun 14 01:32:53 [conn] splitting: buy_201107.data0 shard: ns:buy_201107.data0 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:32:53 [conn4] splitChunk accepted at version 1|0||4fd977852cb587101f2e7a26
m30001| Thu Jun 14 01:32:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:53-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651973680), what: "split", ns: "buy_201107.data0", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977852cb587101f2e7a26') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977852cb587101f2e7a26') } } }
m30001| Thu Jun 14 01:32:53 [conn4] distributed lock 'buy_201107.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:32:53 [conn] ChunkManager: time to load chunks for buy_201107.data0: 0ms sequenceNumber: 6 version: 1|2||4fd977852cb587101f2e7a26 based on: 1|0||4fd977852cb587101f2e7a26
m30999| Thu Jun 14 01:32:53 [conn] CMD: movechunk: { movechunk: "buy_201107.data0", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:32:53 [conn] moving chunk ns: buy_201107.data0 moving ( ns:buy_201107.data0 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:32:53 [conn4] received moveChunk request: { moveChunk: "buy_201107.data0", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data0-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:53 [conn4] created new distributed lock for buy_201107.data0 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:53 [conn4] distributed lock 'buy_201107.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977855a33af45ea50d92d
m30001| Thu Jun 14 01:32:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:53-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651973684), what: "moveChunk.start", ns: "buy_201107.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:53 [conn4] moveChunk request accepted at version 1|2||4fd977852cb587101f2e7a26
m30001| Thu Jun 14 01:32:53 [conn4] moveChunk number of documents: 299
m30000| Thu Jun 14 01:32:53 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy_201107.ns, filling with zeroes...
m30000| Thu Jun 14 01:32:53 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy_201107.ns, size: 16MB, took 0.283 secs
m30000| Thu Jun 14 01:32:53 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy_201107.0, filling with zeroes...
m30000| Thu Jun 14 01:32:54 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy_201107.0, size: 16MB, took 0.327 secs
m30000| Thu Jun 14 01:32:54 [FileAllocator] allocating new datafile /data/db/drop_sharded_db0/buy_201107.1, filling with zeroes...
m30000| Thu Jun 14 01:32:54 [migrateThread] build index buy_201107.data0 { _id: 1 }
m30000| Thu Jun 14 01:32:54 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:54 [migrateThread] info: creating collection buy_201107.data0 on add index
m30000| Thu Jun 14 01:32:54 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data0' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:32:54 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data0", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 299, clonedBytes: 5382, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:32:54 [conn4] moveChunk setting version to: 2|0||4fd977852cb587101f2e7a26
m30000| Thu Jun 14 01:32:54 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data0' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:54 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:54-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651974691), what: "moveChunk.to", ns: "buy_201107.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 620, step2 of 5: 0, step3 of 5: 12, step4 of 5: 0, step5 of 5: 373 } }
m30001| Thu Jun 14 01:32:54 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data0", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 299, clonedBytes: 5382, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:32:54 [conn4] moveChunk updating self version to: 2|1||4fd977852cb587101f2e7a26 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data0'
m30001| Thu Jun 14 01:32:54 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:54-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651974696), what: "moveChunk.commit", ns: "buy_201107.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:54 [conn4] doing delete inline
m30001| Thu Jun 14 01:32:54 [conn4] moveChunk deleted: 299
m30001| Thu Jun 14 01:32:54 [conn4] distributed lock 'buy_201107.data0/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:32:54 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:54-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651974705), what: "moveChunk.from", ns: "buy_201107.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 8, step6 of 6: 9 } }
m30001| Thu Jun 14 01:32:54 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data0", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data0-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:1582 w:15012 reslen:37 1022ms
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:32:54 [conn] ChunkManager: time to load chunks for buy_201107.data0: 0ms sequenceNumber: 7 version: 2|1||4fd977852cb587101f2e7a26 based on: 1|2||4fd977852cb587101f2e7a26
m30999| Thu Jun 14 01:32:54 [conn] enabling sharding on: buy_201108
m30999| Thu Jun 14 01:32:54 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data0", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:54 [conn] enable sharding on: buy_201108.data0 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:54 [conn] going to create 1 chunk(s) for: buy_201108.data0 using new epoch 4fd977862cb587101f2e7a27
m30000| Thu Jun 14 01:32:54 [initandlisten] connection accepted from 127.0.0.1:51275 #12 (12 connections now open)
m30999| Thu Jun 14 01:32:54 [conn] ChunkManager: time to load chunks for buy_201108.data0: 0ms sequenceNumber: 8 version: 1|0||4fd977862cb587101f2e7a27 based on: (empty)
m30000| Thu Jun 14 01:32:54 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:32:54 [conn] splitting: buy_201108.data0 shard: ns:buy_201108.data0 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:32:54 [initandlisten] connection accepted from 127.0.0.1:51276 #13 (13 connections now open)
m30000| Thu Jun 14 01:32:54 [conn6] received splitChunk request: { splitChunk: "buy_201108.data0", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data0-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:32:54 [conn6] created new distributed lock for buy_201108.data0 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:32:54 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339651974:685582305 (sleeping for 30000ms)
m30000| Thu Jun 14 01:32:54 [conn6] distributed lock 'buy_201108.data0/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97786488b0031418cbefe
m30000| Thu Jun 14 01:32:54 [conn6] splitChunk accepted at version 1|0||4fd977862cb587101f2e7a27
m30000| Thu Jun 14 01:32:54 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:54-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651974715), what: "split", ns: "buy_201108.data0", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977862cb587101f2e7a27') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977862cb587101f2e7a27') } } }
m30000| Thu Jun 14 01:32:54 [conn6] distributed lock 'buy_201108.data0/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:32:54 [conn] ChunkManager: time to load chunks for buy_201108.data0: 0ms sequenceNumber: 9 version: 1|2||4fd977862cb587101f2e7a27 based on: 1|0||4fd977862cb587101f2e7a27
m30999| Thu Jun 14 01:32:54 [conn] CMD: movechunk: { movechunk: "buy_201108.data0", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:32:54 [conn] moving chunk ns: buy_201108.data0 moving ( ns:buy_201108.data0 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:32:54 [conn6] received moveChunk request: { moveChunk: "buy_201108.data0", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data0-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:32:54 [conn6] created new distributed lock for buy_201108.data0 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:32:54 [conn6] distributed lock 'buy_201108.data0/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97786488b0031418cbeff
m30000| Thu Jun 14 01:32:54 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:54-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651974722), what: "moveChunk.start", ns: "buy_201108.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:32:54 [conn6] moveChunk request accepted at version 1|2||4fd977862cb587101f2e7a27
m30000| Thu Jun 14 01:32:54 [conn6] moveChunk number of documents: 299
m30001| Thu Jun 14 01:32:54 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy_201108.ns, filling with zeroes...
m30000| Thu Jun 14 01:32:55 [FileAllocator] done allocating datafile /data/db/drop_sharded_db0/buy_201107.1, size: 32MB, took 0.737 secs
m30001| Thu Jun 14 01:32:55 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy_201108.ns, size: 16MB, took 0.499 secs
m30001| Thu Jun 14 01:32:55 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy_201108.0, filling with zeroes...
m30001| Thu Jun 14 01:32:55 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy_201108.0, size: 16MB, took 0.267 secs
m30001| Thu Jun 14 01:32:55 [FileAllocator] allocating new datafile /data/db/drop_sharded_db1/buy_201108.1, filling with zeroes...
m30001| Thu Jun 14 01:32:55 [migrateThread] build index buy_201108.data0 { _id: 1 }
m30001| Thu Jun 14 01:32:55 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:55 [migrateThread] info: creating collection buy_201108.data0 on add index
m30001| Thu Jun 14 01:32:55 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data0' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:55 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data0", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 299, clonedBytes: 5382, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:32:55 [conn6] moveChunk setting version to: 2|0||4fd977862cb587101f2e7a27
m30001| Thu Jun 14 01:32:55 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data0' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:32:55 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:55-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651975731), what: "moveChunk.to", ns: "buy_201108.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 937, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 58 } }
m30000| Thu Jun 14 01:32:55 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data0", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 299, clonedBytes: 5382, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:32:55 [conn6] moveChunk updating self version to: 2|1||4fd977862cb587101f2e7a27 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data0'
m30000| Thu Jun 14 01:32:55 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:55-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651975736), what: "moveChunk.commit", ns: "buy_201108.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:32:55 [conn6] doing delete inline
m30000| Thu Jun 14 01:32:55 [conn6] moveChunk deleted: 299
{ "millis" : 1026, "ok" : 1 }
m30000| Thu Jun 14 01:32:55 [conn6] distributed lock 'buy_201108.data0/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:32:55 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:55-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651975746), what: "moveChunk.from", ns: "buy_201108.data0", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 9 } }
m30000| Thu Jun 14 01:32:55 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data0", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data0-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:3509 w:8712 reslen:37 1024ms
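
The moveChunk.start, moveChunk.commit, moveChunk.from and moveChunk.to entries that both shards report they are "about to log" are written to the changelog collection on the config server (localhost:30000 here), together with per-step timings; step4 of 6 on the donor is the roughly one-second data-transfer wait seen in each of these migrations. A minimal sketch of reading those events back through the mongo shell; the query is illustrative and not part of the test itself:

// connect the shell to the mongos and read the migration events recorded above
var changelog = db.getSiblingDB("config").changelog;
changelog.find({ ns: "buy_201108.data0", what: /^moveChunk/ })
         .sort({ time: 1 })
         .forEach(printjson);   // moveChunk.start, .to, .commit, .from with their step timings
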
m30001| Thu Jun 14 01:32:55 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:32:55 [conn4] received splitChunk request: { splitChunk: "buy.data1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data1-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:55 [conn4] created new distributed lock for buy.data1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:55 [conn4] distributed lock 'buy.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977875a33af45ea50d92e
m30001| Thu Jun 14 01:32:55 [conn4] splitChunk accepted at version 1|0||4fd977872cb587101f2e7a28
m30001| Thu Jun 14 01:32:55 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:55-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651975753), what: "split", ns: "buy.data1", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977872cb587101f2e7a28') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977872cb587101f2e7a28') } } }
m30001| Thu Jun 14 01:32:55 [conn4] distributed lock 'buy.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:32:55 [conn4] received moveChunk request: { moveChunk: "buy.data1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data1-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:55 [conn4] created new distributed lock for buy.data1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:55 [conn4] distributed lock 'buy.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977875a33af45ea50d92f
m30001| Thu Jun 14 01:32:55 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:55-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651975756), what: "moveChunk.start", ns: "buy.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:55 [conn4] moveChunk request accepted at version 1|2||4fd977872cb587101f2e7a28
m30001| Thu Jun 14 01:32:55 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:32:55 [migrateThread] build index buy.data1 { _id: 1 }
m30000| Thu Jun 14 01:32:55 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:55 [migrateThread] info: creating collection buy.data1 on add index
m30000| Thu Jun 14 01:32:55 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data1' { _id: 1.0 } -> { _id: MaxKey }
m30999| Thu Jun 14 01:32:55 [conn] ChunkManager: time to load chunks for buy_201108.data0: 0ms sequenceNumber: 10 version: 2|1||4fd977862cb587101f2e7a27 based on: 1|2||4fd977862cb587101f2e7a27
m30999| Thu Jun 14 01:32:55 [conn] CMD: shardcollection: { shardcollection: "buy.data1", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:55 [conn] enable sharding on: buy.data1 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:55 [conn] going to create 1 chunk(s) for: buy.data1 using new epoch 4fd977872cb587101f2e7a28
m30999| Thu Jun 14 01:32:55 [conn] ChunkManager: time to load chunks for buy.data1: 0ms sequenceNumber: 11 version: 1|0||4fd977872cb587101f2e7a28 based on: (empty)
m30999| Thu Jun 14 01:32:55 [conn] splitting: buy.data1 shard: ns:buy.data1 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30999| Thu Jun 14 01:32:55 [conn] ChunkManager: time to load chunks for buy.data1: 0ms sequenceNumber: 12 version: 1|2||4fd977872cb587101f2e7a28 based on: 1|0||4fd977872cb587101f2e7a28
m30999| Thu Jun 14 01:32:55 [conn] CMD: movechunk: { movechunk: "buy.data1", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:32:55 [conn] moving chunk ns: buy.data1 moving ( ns:buy.data1 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:32:56 [FileAllocator] done allocating datafile /data/db/drop_sharded_db1/buy_201108.1, size: 32MB, took 0.589 secs
m30001| Thu Jun 14 01:32:56 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data1", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:32:56 [conn4] moveChunk setting version to: 2|0||4fd977872cb587101f2e7a28
m30000| Thu Jun 14 01:32:56 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data1' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:56 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:56-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651976771), what: "moveChunk.to", ns: "buy.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 1002 } }
m30001| Thu Jun 14 01:32:56 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data1", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:32:56 [conn4] moveChunk updating self version to: 2|1||4fd977872cb587101f2e7a28 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data1'
m30001| Thu Jun 14 01:32:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:56-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651976776), what: "moveChunk.commit", ns: "buy.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:56 [conn4] doing delete inline
m30001| Thu Jun 14 01:32:56 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:32:56 [conn4] distributed lock 'buy.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:32:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:56-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651976784), what: "moveChunk.from", ns: "buy.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 16, step6 of 6: 8 } }
m30001| Thu Jun 14 01:32:56 [conn4] command admin.$cmd command: { moveChunk: "buy.data1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data1-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2212 w:21999 reslen:37 1029ms
m30999| Thu Jun 14 01:32:56 [conn] ChunkManager: time to load chunks for buy.data1: 0ms sequenceNumber: 13 version: 2|1||4fd977872cb587101f2e7a28 based on: 1|2||4fd977872cb587101f2e7a28
{ "millis" : 1030, "ok" : 1 }
m30999| Thu Jun 14 01:32:56 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data1", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:56 [conn] enable sharding on: buy_201107.data1 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:56 [conn] going to create 1 chunk(s) for: buy_201107.data1 using new epoch 4fd977882cb587101f2e7a29
m30999| Thu Jun 14 01:32:56 [conn] ChunkManager: time to load chunks for buy_201107.data1: 0ms sequenceNumber: 14 version: 1|0||4fd977882cb587101f2e7a29 based on: (empty)
m30001| Thu Jun 14 01:32:56 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:32:56 [conn] splitting: buy_201107.data1 shard: ns:buy_201107.data1 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:32:56 [conn4] received splitChunk request: { splitChunk: "buy_201107.data1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data1-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:56 [conn4] created new distributed lock for buy_201107.data1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:56 [conn4] distributed lock 'buy_201107.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977885a33af45ea50d930
m30001| Thu Jun 14 01:32:56 [conn4] splitChunk accepted at version 1|0||4fd977882cb587101f2e7a29
m30001| Thu Jun 14 01:32:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:56-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651976791), what: "split", ns: "buy_201107.data1", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977882cb587101f2e7a29') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977882cb587101f2e7a29') } } }
m30001| Thu Jun 14 01:32:56 [conn4] distributed lock 'buy_201107.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:32:56 [conn] ChunkManager: time to load chunks for buy_201107.data1: 0ms sequenceNumber: 15 version: 1|2||4fd977882cb587101f2e7a29 based on: 1|0||4fd977882cb587101f2e7a29
m30999| Thu Jun 14 01:32:56 [conn] CMD: movechunk: { movechunk: "buy_201107.data1", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:32:56 [conn] moving chunk ns: buy_201107.data1 moving ( ns:buy_201107.data1 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:32:56 [conn4] received moveChunk request: { moveChunk: "buy_201107.data1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data1-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:56 [conn4] created new distributed lock for buy_201107.data1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:56 [conn4] distributed lock 'buy_201107.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977885a33af45ea50d931
m30001| Thu Jun 14 01:32:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:56-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651976794), what: "moveChunk.start", ns: "buy_201107.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:56 [conn4] moveChunk request accepted at version 1|2||4fd977882cb587101f2e7a29
m30001| Thu Jun 14 01:32:56 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:32:56 [migrateThread] build index buy_201107.data1 { _id: 1 }
m30000| Thu Jun 14 01:32:56 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:56 [migrateThread] info: creating collection buy_201107.data1 on add index
m30000| Thu Jun 14 01:32:56 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data1' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:32:57 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data1", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:32:57 [conn4] moveChunk setting version to: 2|0||4fd977882cb587101f2e7a29
m30000| Thu Jun 14 01:32:57 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data1' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:57 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:57-7", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651977803), what: "moveChunk.to", ns: "buy_201107.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 996 } }
m30001| Thu Jun 14 01:32:57 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data1", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:32:57 [conn4] moveChunk updating self version to: 2|1||4fd977882cb587101f2e7a29 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data1'
m30001| Thu Jun 14 01:32:57 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:57-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651977808), what: "moveChunk.commit", ns: "buy_201107.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:57 [conn4] doing delete inline
m30001| Thu Jun 14 01:32:57 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:32:57 [conn4] distributed lock 'buy_201107.data1/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:32:57 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:57-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651977817), what: "moveChunk.from", ns: "buy_201107.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:32:57 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data1-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2852 w:29425 reslen:37 1023ms
m30999| Thu Jun 14 01:32:57 [conn] ChunkManager: time to load chunks for buy_201107.data1: 0ms sequenceNumber: 16 version: 2|1||4fd977882cb587101f2e7a29 based on: 1|2||4fd977882cb587101f2e7a29
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:32:57 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data1", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:57 [conn] enable sharding on: buy_201108.data1 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:57 [conn] going to create 1 chunk(s) for: buy_201108.data1 using new epoch 4fd977892cb587101f2e7a2a
m30999| Thu Jun 14 01:32:57 [conn] ChunkManager: time to load chunks for buy_201108.data1: 0ms sequenceNumber: 17 version: 1|0||4fd977892cb587101f2e7a2a based on: (empty)
m30999| Thu Jun 14 01:32:57 [conn] splitting: buy_201108.data1 shard: ns:buy_201108.data1 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30999| Thu Jun 14 01:32:57 [conn] ChunkManager: time to load chunks for buy_201108.data1: 0ms sequenceNumber: 18 version: 1|2||4fd977892cb587101f2e7a2a based on: 1|0||4fd977892cb587101f2e7a2a
m30999| Thu Jun 14 01:32:57 [conn] CMD: movechunk: { movechunk: "buy_201108.data1", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:32:57 [conn] moving chunk ns: buy_201108.data1 moving ( ns:buy_201108.data1 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30001| Thu Jun 14 01:32:57 [migrateThread] build index buy_201108.data1 { _id: 1 }
m30000| Thu Jun 14 01:32:57 [conn7] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:32:57 [conn6] received splitChunk request: { splitChunk: "buy_201108.data1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data1-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:32:57 [conn6] created new distributed lock for buy_201108.data1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:32:57 [conn6] distributed lock 'buy_201108.data1/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97789488b0031418cbf00
m30000| Thu Jun 14 01:32:57 [conn6] splitChunk accepted at version 1|0||4fd977892cb587101f2e7a2a
m30000| Thu Jun 14 01:32:57 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:57-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651977823), what: "split", ns: "buy_201108.data1", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977892cb587101f2e7a2a') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977892cb587101f2e7a2a') } } }
m30000| Thu Jun 14 01:32:57 [conn6] distributed lock 'buy_201108.data1/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:32:57 [conn6] received moveChunk request: { moveChunk: "buy_201108.data1", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data1-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:32:57 [conn6] created new distributed lock for buy_201108.data1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:32:57 [conn6] distributed lock 'buy_201108.data1/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97789488b0031418cbf01
m30000| Thu Jun 14 01:32:57 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:57-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651977826), what: "moveChunk.start", ns: "buy_201108.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:32:57 [conn6] moveChunk request accepted at version 1|2||4fd977892cb587101f2e7a2a
m30000| Thu Jun 14 01:32:57 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:32:57 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:32:57 [migrateThread] info: creating collection buy_201108.data1 on add index
m30001| Thu Jun 14 01:32:57 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data1' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:57 [initandlisten] connection accepted from 127.0.0.1:51277 #14 (14 connections now open)
m30000| Thu Jun 14 01:32:58 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data1", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:32:58 [conn6] moveChunk setting version to: 2|0||4fd977892cb587101f2e7a2a
m30001| Thu Jun 14 01:32:58 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data1' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:32:58 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:58-17", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651978835), what: "moveChunk.to", ns: "buy_201108.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 996 } }
m30000| Thu Jun 14 01:32:58 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data1", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:32:58 [conn6] moveChunk updating self version to: 2|1||4fd977892cb587101f2e7a2a through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data1'
m30000| Thu Jun 14 01:32:58 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:58-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651978840), what: "moveChunk.commit", ns: "buy_201108.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:32:58 [conn6] doing delete inline
m30000| Thu Jun 14 01:32:58 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:32:58 [conn6] distributed lock 'buy_201108.data1/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:32:58 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:58-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651978849), what: "moveChunk.from", ns: "buy_201108.data1", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 9 } }
m30000| Thu Jun 14 01:32:58 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data1", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data1-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:6226 w:16243 reslen:37 1024ms
m30999| Thu Jun 14 01:32:58 [conn] ChunkManager: time to load chunks for buy_201108.data1: 0ms sequenceNumber: 19 version: 2|1||4fd977892cb587101f2e7a2a based on: 1|2||4fd977892cb587101f2e7a2a
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:32:58 [conn] CMD: shardcollection: { shardcollection: "buy.data2", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:58 [conn] enable sharding on: buy.data2 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:58 [conn] going to create 1 chunk(s) for: buy.data2 using new epoch 4fd9778a2cb587101f2e7a2b
m30999| Thu Jun 14 01:32:58 [conn] ChunkManager: time to load chunks for buy.data2: 0ms sequenceNumber: 20 version: 1|0||4fd9778a2cb587101f2e7a2b based on: (empty)
m30001| Thu Jun 14 01:32:58 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:32:58 [conn] splitting: buy.data2 shard: ns:buy.data2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:32:58 [conn4] received splitChunk request: { splitChunk: "buy.data2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data2-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:58 [conn4] created new distributed lock for buy.data2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:58 [conn4] distributed lock 'buy.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778a5a33af45ea50d932
m30001| Thu Jun 14 01:32:58 [conn4] splitChunk accepted at version 1|0||4fd9778a2cb587101f2e7a2b
m30001| Thu Jun 14 01:32:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:58-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651978856), what: "split", ns: "buy.data2", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9778a2cb587101f2e7a2b') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9778a2cb587101f2e7a2b') } } }
m30001| Thu Jun 14 01:32:58 [conn4] distributed lock 'buy.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:32:58 [conn] ChunkManager: time to load chunks for buy.data2: 0ms sequenceNumber: 21 version: 1|2||4fd9778a2cb587101f2e7a2b based on: 1|0||4fd9778a2cb587101f2e7a2b
m30999| Thu Jun 14 01:32:58 [conn] CMD: movechunk: { movechunk: "buy.data2", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:32:58 [conn] moving chunk ns: buy.data2 moving ( ns:buy.data2 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:32:58 [conn4] received moveChunk request: { moveChunk: "buy.data2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data2-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:58 [conn4] created new distributed lock for buy.data2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:58 [conn4] distributed lock 'buy.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778a5a33af45ea50d933
m30001| Thu Jun 14 01:32:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:58-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651978859), what: "moveChunk.start", ns: "buy.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:58 [conn4] moveChunk request accepted at version 1|2||4fd9778a2cb587101f2e7a2b
m30001| Thu Jun 14 01:32:58 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:32:58 [migrateThread] build index buy.data2 { _id: 1 }
m30000| Thu Jun 14 01:32:58 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:58 [migrateThread] info: creating collection buy.data2 on add index
m30000| Thu Jun 14 01:32:58 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data2' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:32:59 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data2", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:32:59 [conn4] moveChunk setting version to: 2|0||4fd9778a2cb587101f2e7a2b
m30000| Thu Jun 14 01:32:59 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data2' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:32:59 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:59-12", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651979867), what: "moveChunk.to", ns: "buy.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:32:59 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data2", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:32:59 [conn4] moveChunk updating self version to: 2|1||4fd9778a2cb587101f2e7a2b through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data2'
m30001| Thu Jun 14 01:32:59 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:59-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651979872), what: "moveChunk.commit", ns: "buy.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:59 [conn4] doing delete inline
m30001| Thu Jun 14 01:32:59 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:32:59 [conn4] distributed lock 'buy.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:32:59 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:59-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651979881), what: "moveChunk.from", ns: "buy.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:32:59 [conn4] command admin.$cmd command: { moveChunk: "buy.data2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data2-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:3489 w:36414 reslen:37 1022ms
m30999| Thu Jun 14 01:32:59 [conn] ChunkManager: time to load chunks for buy.data2: 0ms sequenceNumber: 22 version: 2|1||4fd9778a2cb587101f2e7a2b based on: 1|2||4fd9778a2cb587101f2e7a2b
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:32:59 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data2", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:32:59 [conn] enable sharding on: buy_201107.data2 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:32:59 [conn] going to create 1 chunk(s) for: buy_201107.data2 using new epoch 4fd9778b2cb587101f2e7a2c
m30999| Thu Jun 14 01:32:59 [conn] ChunkManager: time to load chunks for buy_201107.data2: 0ms sequenceNumber: 23 version: 1|0||4fd9778b2cb587101f2e7a2c based on: (empty)
m30001| Thu Jun 14 01:32:59 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:32:59 [conn] splitting: buy_201107.data2 shard: ns:buy_201107.data2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:32:59 [conn4] received splitChunk request: { splitChunk: "buy_201107.data2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data2-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:59 [conn4] created new distributed lock for buy_201107.data2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:59 [conn4] distributed lock 'buy_201107.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778b5a33af45ea50d934
m30001| Thu Jun 14 01:32:59 [conn4] splitChunk accepted at version 1|0||4fd9778b2cb587101f2e7a2c
m30001| Thu Jun 14 01:32:59 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:59-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651979887), what: "split", ns: "buy_201107.data2", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9778b2cb587101f2e7a2c') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9778b2cb587101f2e7a2c') } } }
m30001| Thu Jun 14 01:32:59 [conn4] distributed lock 'buy_201107.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:32:59 [conn] ChunkManager: time to load chunks for buy_201107.data2: 0ms sequenceNumber: 24 version: 1|2||4fd9778b2cb587101f2e7a2c based on: 1|0||4fd9778b2cb587101f2e7a2c
m30999| Thu Jun 14 01:32:59 [conn] CMD: movechunk: { movechunk: "buy_201107.data2", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:32:59 [conn] moving chunk ns: buy_201107.data2 moving ( ns:buy_201107.data2 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:32:59 [conn4] received moveChunk request: { moveChunk: "buy_201107.data2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data2-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:32:59 [conn4] created new distributed lock for buy_201107.data2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:32:59 [conn4] distributed lock 'buy_201107.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778b5a33af45ea50d935
m30001| Thu Jun 14 01:32:59 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:32:59-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651979890), what: "moveChunk.start", ns: "buy_201107.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:32:59 [conn4] moveChunk request accepted at version 1|2||4fd9778b2cb587101f2e7a2c
m30001| Thu Jun 14 01:32:59 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:32:59 [migrateThread] build index buy_201107.data2 { _id: 1 }
m30000| Thu Jun 14 01:32:59 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:32:59 [migrateThread] info: creating collection buy_201107.data2 on add index
m30000| Thu Jun 14 01:32:59 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data2' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:00 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data2", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:00 [conn4] moveChunk setting version to: 2|0||4fd9778b2cb587101f2e7a2c
m30000| Thu Jun 14 01:33:00 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data2' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:00 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:00-13", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651980899), what: "moveChunk.to", ns: "buy_201107.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 996 } }
m30001| Thu Jun 14 01:33:00 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data2", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:00 [conn4] moveChunk updating self version to: 2|1||4fd9778b2cb587101f2e7a2c through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data2'
m30001| Thu Jun 14 01:33:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:00-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651980904), what: "moveChunk.commit", ns: "buy_201107.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:00 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:00 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:00 [conn4] distributed lock 'buy_201107.data2/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:00-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651980920), what: "moveChunk.from", ns: "buy_201107.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 16 } }
m30001| Thu Jun 14 01:33:00 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data2-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:4120 w:50922 reslen:37 1030ms
m30999| Thu Jun 14 01:33:00 [conn] ChunkManager: time to load chunks for buy_201107.data2: 0ms sequenceNumber: 25 version: 2|1||4fd9778b2cb587101f2e7a2c based on: 1|2||4fd9778b2cb587101f2e7a2c
{ "millis" : 1032, "ok" : 1 }
m30999| Thu Jun 14 01:33:00 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data2", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:00 [conn] enable sharding on: buy_201108.data2 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:00 [conn] going to create 1 chunk(s) for: buy_201108.data2 using new epoch 4fd9778c2cb587101f2e7a2d
m30999| Thu Jun 14 01:33:00 [conn] ChunkManager: time to load chunks for buy_201108.data2: 0ms sequenceNumber: 26 version: 1|0||4fd9778c2cb587101f2e7a2d based on: (empty)
m30000| Thu Jun 14 01:33:00 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:00 [conn] splitting: buy_201108.data2 shard: ns:buy_201108.data2 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:00 [conn6] received splitChunk request: { splitChunk: "buy_201108.data2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data2-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:00 [conn6] created new distributed lock for buy_201108.data2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:00 [conn6] distributed lock 'buy_201108.data2/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd9778c488b0031418cbf02
m30000| Thu Jun 14 01:33:00 [conn6] splitChunk accepted at version 1|0||4fd9778c2cb587101f2e7a2d
m30000| Thu Jun 14 01:33:00 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:00-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651980927), what: "split", ns: "buy_201108.data2", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9778c2cb587101f2e7a2d') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9778c2cb587101f2e7a2d') } } }
m30000| Thu Jun 14 01:33:00 [conn6] distributed lock 'buy_201108.data2/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:00 [conn] ChunkManager: time to load chunks for buy_201108.data2: 0ms sequenceNumber: 27 version: 1|2||4fd9778c2cb587101f2e7a2d based on: 1|0||4fd9778c2cb587101f2e7a2d
m30999| Thu Jun 14 01:33:00 [conn] CMD: movechunk: { movechunk: "buy_201108.data2", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:00 [conn] moving chunk ns: buy_201108.data2 moving ( ns:buy_201108.data2 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:00 [conn6] received moveChunk request: { moveChunk: "buy_201108.data2", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data2-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:00 [conn6] created new distributed lock for buy_201108.data2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:00 [conn6] distributed lock 'buy_201108.data2/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd9778c488b0031418cbf03
m30000| Thu Jun 14 01:33:00 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:00-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651980930), what: "moveChunk.start", ns: "buy_201108.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:00 [conn6] moveChunk request accepted at version 1|2||4fd9778c2cb587101f2e7a2d
m30000| Thu Jun 14 01:33:00 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:00 [migrateThread] build index buy_201108.data2 { _id: 1 }
m30001| Thu Jun 14 01:33:00 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:00 [migrateThread] info: creating collection buy_201108.data2 on add index
m30001| Thu Jun 14 01:33:00 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data2' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:01 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data2", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:01 [conn6] moveChunk setting version to: 2|0||4fd9778c2cb587101f2e7a2d
m30001| Thu Jun 14 01:33:01 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data2' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:01 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:01-26", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651981940), what: "moveChunk.to", ns: "buy_201108.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 996 } }
m30000| Thu Jun 14 01:33:01 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data2", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:01 [conn6] moveChunk updating self version to: 2|1||4fd9778c2cb587101f2e7a2d through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data2'
m30000| Thu Jun 14 01:33:01 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:01-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651981944), what: "moveChunk.commit", ns: "buy_201108.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:01 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:01 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:01 [conn6] distributed lock 'buy_201108.data2/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:01 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:01-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651981954), what: "moveChunk.from", ns: "buy_201108.data2", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:01 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data2", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data2-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:8976 w:23769 reslen:37 1024ms
m30999| Thu Jun 14 01:33:01 [conn] ChunkManager: time to load chunks for buy_201108.data2: 0ms sequenceNumber: 28 version: 2|1||4fd9778c2cb587101f2e7a2d based on: 1|2||4fd9778c2cb587101f2e7a2d
{ "millis" : 1025, "ok" : 1 }
m30999| Thu Jun 14 01:33:01 [conn] CMD: shardcollection: { shardcollection: "buy.data3", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:01 [conn] enable sharding on: buy.data3 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:01 [conn] going to create 1 chunk(s) for: buy.data3 using new epoch 4fd9778d2cb587101f2e7a2e
m30999| Thu Jun 14 01:33:01 [conn] ChunkManager: time to load chunks for buy.data3: 0ms sequenceNumber: 29 version: 1|0||4fd9778d2cb587101f2e7a2e based on: (empty)
m30001| Thu Jun 14 01:33:01 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:01 [conn] splitting: buy.data3 shard: ns:buy.data3 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:01 [conn4] received splitChunk request: { splitChunk: "buy.data3", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data3-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:01 [conn4] created new distributed lock for buy.data3 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:01 [conn4] distributed lock 'buy.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778d5a33af45ea50d936
m30001| Thu Jun 14 01:33:01 [conn4] splitChunk accepted at version 1|0||4fd9778d2cb587101f2e7a2e
m30001| Thu Jun 14 01:33:01 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:01-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651981961), what: "split", ns: "buy.data3", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9778d2cb587101f2e7a2e') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9778d2cb587101f2e7a2e') } } }
m30001| Thu Jun 14 01:33:01 [conn4] distributed lock 'buy.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:01 [conn] ChunkManager: time to load chunks for buy.data3: 0ms sequenceNumber: 30 version: 1|2||4fd9778d2cb587101f2e7a2e based on: 1|0||4fd9778d2cb587101f2e7a2e
m30999| Thu Jun 14 01:33:01 [conn] CMD: movechunk: { movechunk: "buy.data3", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:01 [conn] moving chunk ns: buy.data3 moving ( ns:buy.data3 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:01 [conn4] received moveChunk request: { moveChunk: "buy.data3", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data3-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:01 [conn4] created new distributed lock for buy.data3 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:01 [conn4] distributed lock 'buy.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778d5a33af45ea50d937
m30001| Thu Jun 14 01:33:01 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:01-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651981964), what: "moveChunk.start", ns: "buy.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:01 [conn4] moveChunk request accepted at version 1|2||4fd9778d2cb587101f2e7a2e
m30001| Thu Jun 14 01:33:01 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:01 [migrateThread] build index buy.data3 { _id: 1 }
m30000| Thu Jun 14 01:33:01 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:01 [migrateThread] info: creating collection buy.data3 on add index
m30000| Thu Jun 14 01:33:01 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data3' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:02 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data3", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:02 [conn4] moveChunk setting version to: 2|0||4fd9778d2cb587101f2e7a2e
m30000| Thu Jun 14 01:33:02 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data3' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:02 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:02-18", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651982972), what: "moveChunk.to", ns: "buy.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:33:02 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data3", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:02 [conn4] moveChunk updating self version to: 2|1||4fd9778d2cb587101f2e7a2e through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data3'
m30001| Thu Jun 14 01:33:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:02-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651982976), what: "moveChunk.commit", ns: "buy.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:02 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:02 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:02 [conn4] distributed lock 'buy.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:02-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651982985), what: "moveChunk.from", ns: "buy.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:02 [conn4] command admin.$cmd command: { moveChunk: "buy.data3", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data3-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:4781 w:57966 reslen:37 1022ms
m30999| Thu Jun 14 01:33:02 [conn] ChunkManager: time to load chunks for buy.data3: 0ms sequenceNumber: 31 version: 2|1||4fd9778d2cb587101f2e7a2e based on: 1|2||4fd9778d2cb587101f2e7a2e
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:33:02 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data3", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:02 [conn] enable sharding on: buy_201107.data3 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:02 [conn] going to create 1 chunk(s) for: buy_201107.data3 using new epoch 4fd9778e2cb587101f2e7a2f
m30999| Thu Jun 14 01:33:02 [conn] ChunkManager: time to load chunks for buy_201107.data3: 0ms sequenceNumber: 32 version: 1|0||4fd9778e2cb587101f2e7a2f based on: (empty)
m30001| Thu Jun 14 01:33:02 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:02 [conn] splitting: buy_201107.data3 shard: ns:buy_201107.data3 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:02 [conn4] received splitChunk request: { splitChunk: "buy_201107.data3", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data3-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:02 [conn4] created new distributed lock for buy_201107.data3 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:02 [conn4] distributed lock 'buy_201107.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778e5a33af45ea50d938
m30001| Thu Jun 14 01:33:02 [conn4] splitChunk accepted at version 1|0||4fd9778e2cb587101f2e7a2f
m30001| Thu Jun 14 01:33:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:02-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651982992), what: "split", ns: "buy_201107.data3", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9778e2cb587101f2e7a2f') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9778e2cb587101f2e7a2f') } } }
m30001| Thu Jun 14 01:33:02 [conn4] distributed lock 'buy_201107.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:02 [conn] ChunkManager: time to load chunks for buy_201107.data3: 0ms sequenceNumber: 33 version: 1|2||4fd9778e2cb587101f2e7a2f based on: 1|0||4fd9778e2cb587101f2e7a2f
m30999| Thu Jun 14 01:33:02 [conn] CMD: movechunk: { movechunk: "buy_201107.data3", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:02 [conn] moving chunk ns: buy_201107.data3 moving ( ns:buy_201107.data3 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:02 [conn4] received moveChunk request: { moveChunk: "buy_201107.data3", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data3-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:02 [conn4] created new distributed lock for buy_201107.data3 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:02 [conn4] distributed lock 'buy_201107.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9778e5a33af45ea50d939
m30001| Thu Jun 14 01:33:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:02-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651982994), what: "moveChunk.start", ns: "buy_201107.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:02 [conn4] moveChunk request accepted at version 1|2||4fd9778e2cb587101f2e7a2f
m30001| Thu Jun 14 01:33:02 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:02 [migrateThread] build index buy_201107.data3 { _id: 1 }
m30000| Thu Jun 14 01:33:02 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:02 [migrateThread] info: creating collection buy_201107.data3 on add index
m30000| Thu Jun 14 01:33:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data3' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:03 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data3", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:03 [conn4] moveChunk setting version to: 2|0||4fd9778e2cb587101f2e7a2f
m30000| Thu Jun 14 01:33:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data3' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:04 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:04-19", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651984004), what: "moveChunk.to", ns: "buy_201107.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 996 } }
m30001| Thu Jun 14 01:33:04 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data3", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:04 [conn4] moveChunk updating self version to: 2|1||4fd9778e2cb587101f2e7a2f through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data3'
m30001| Thu Jun 14 01:33:04 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:04-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651984008), what: "moveChunk.commit", ns: "buy_201107.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:04 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:04 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:04 [conn4] distributed lock 'buy_201107.data3/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:04 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:04-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651984017), what: "moveChunk.from", ns: "buy_201107.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:04 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data3", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data3-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:5419 w:65373 reslen:37 1023ms
m30999| Thu Jun 14 01:33:04 [conn] ChunkManager: time to load chunks for buy_201107.data3: 0ms sequenceNumber: 34 version: 2|1||4fd9778e2cb587101f2e7a2f based on: 1|2||4fd9778e2cb587101f2e7a2f
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:33:04 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data3", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:04 [conn] enable sharding on: buy_201108.data3 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:04 [conn] going to create 1 chunk(s) for: buy_201108.data3 using new epoch 4fd977902cb587101f2e7a30
m30999| Thu Jun 14 01:33:04 [conn] ChunkManager: time to load chunks for buy_201108.data3: 0ms sequenceNumber: 35 version: 1|0||4fd977902cb587101f2e7a30 based on: (empty)
m30000| Thu Jun 14 01:33:04 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:04 [conn] splitting: buy_201108.data3 shard: ns:buy_201108.data3 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:04 [conn6] received splitChunk request: { splitChunk: "buy_201108.data3", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data3-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:04 [conn6] created new distributed lock for buy_201108.data3 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:04 [conn6] distributed lock 'buy_201108.data3/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97790488b0031418cbf04
m30000| Thu Jun 14 01:33:04 [conn6] splitChunk accepted at version 1|0||4fd977902cb587101f2e7a30
m30000| Thu Jun 14 01:33:04 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:04-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651984024), what: "split", ns: "buy_201108.data3", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977902cb587101f2e7a30') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977902cb587101f2e7a30') } } }
m30000| Thu Jun 14 01:33:04 [conn6] distributed lock 'buy_201108.data3/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:04 [conn] ChunkManager: time to load chunks for buy_201108.data3: 0ms sequenceNumber: 36 version: 1|2||4fd977902cb587101f2e7a30 based on: 1|0||4fd977902cb587101f2e7a30
m30999| Thu Jun 14 01:33:04 [conn] CMD: movechunk: { movechunk: "buy_201108.data3", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:04 [conn] moving chunk ns: buy_201108.data3 moving ( ns:buy_201108.data3 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:04 [conn6] received moveChunk request: { moveChunk: "buy_201108.data3", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data3-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:04 [conn6] created new distributed lock for buy_201108.data3 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:04 [conn6] distributed lock 'buy_201108.data3/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97790488b0031418cbf05
m30000| Thu Jun 14 01:33:04 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:04-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651984026), what: "moveChunk.start", ns: "buy_201108.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:04 [conn6] moveChunk request accepted at version 1|2||4fd977902cb587101f2e7a30
m30000| Thu Jun 14 01:33:04 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:04 [migrateThread] build index buy_201108.data3 { _id: 1 }
m30001| Thu Jun 14 01:33:04 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:04 [migrateThread] info: creating collection buy_201108.data3 on add index
m30001| Thu Jun 14 01:33:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data3' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:05 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data3", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:05 [conn6] moveChunk setting version to: 2|0||4fd977902cb587101f2e7a30
m30001| Thu Jun 14 01:33:05 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data3' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:05 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:05-35", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651985036), what: "moveChunk.to", ns: "buy_201108.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 996 } }
m30000| Thu Jun 14 01:33:05 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data3", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:05 [conn6] moveChunk updating self version to: 2|1||4fd977902cb587101f2e7a30 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data3'
m30000| Thu Jun 14 01:33:05 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:05-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651985040), what: "moveChunk.commit", ns: "buy_201108.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:05 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:05 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:05 [conn6] distributed lock 'buy_201108.data3/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:05 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:05-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651985049), what: "moveChunk.from", ns: "buy_201108.data3", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:05 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data3", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data3-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:11705 w:31160 reslen:37 1023ms
m30999| Thu Jun 14 01:33:05 [conn] ChunkManager: time to load chunks for buy_201108.data3: 0ms sequenceNumber: 37 version: 2|1||4fd977902cb587101f2e7a30 based on: 1|2||4fd977902cb587101f2e7a30
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:33:05 [conn] CMD: shardcollection: { shardcollection: "buy.data4", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:05 [conn] enable sharding on: buy.data4 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:05 [conn] going to create 1 chunk(s) for: buy.data4 using new epoch 4fd977912cb587101f2e7a31
m30999| Thu Jun 14 01:33:05 [conn] ChunkManager: time to load chunks for buy.data4: 0ms sequenceNumber: 38 version: 1|0||4fd977912cb587101f2e7a31 based on: (empty)
m30001| Thu Jun 14 01:33:05 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:05 [conn] splitting: buy.data4 shard: ns:buy.data4 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:05 [conn4] received splitChunk request: { splitChunk: "buy.data4", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data4-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:05 [conn4] created new distributed lock for buy.data4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:05 [conn4] distributed lock 'buy.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977915a33af45ea50d93a
m30001| Thu Jun 14 01:33:05 [conn4] splitChunk accepted at version 1|0||4fd977912cb587101f2e7a31
m30001| Thu Jun 14 01:33:05 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:05-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651985056), what: "split", ns: "buy.data4", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977912cb587101f2e7a31') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977912cb587101f2e7a31') } } }
m30001| Thu Jun 14 01:33:05 [conn4] distributed lock 'buy.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:05 [conn] ChunkManager: time to load chunks for buy.data4: 0ms sequenceNumber: 39 version: 1|2||4fd977912cb587101f2e7a31 based on: 1|0||4fd977912cb587101f2e7a31
m30999| Thu Jun 14 01:33:05 [conn] CMD: movechunk: { movechunk: "buy.data4", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:05 [conn] moving chunk ns: buy.data4 moving ( ns:buy.data4 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:05 [conn4] received moveChunk request: { moveChunk: "buy.data4", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data4-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:05 [conn4] created new distributed lock for buy.data4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:05 [conn4] distributed lock 'buy.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977915a33af45ea50d93b
m30001| Thu Jun 14 01:33:05 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:05-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651985059), what: "moveChunk.start", ns: "buy.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:05 [conn4] moveChunk request accepted at version 1|2||4fd977912cb587101f2e7a31
m30001| Thu Jun 14 01:33:05 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:05 [migrateThread] build index buy.data4 { _id: 1 }
m30000| Thu Jun 14 01:33:05 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:05 [migrateThread] info: creating collection buy.data4 on add index
m30000| Thu Jun 14 01:33:05 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data4' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:06 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data4", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:06 [conn4] moveChunk setting version to: 2|0||4fd977912cb587101f2e7a31
m30000| Thu Jun 14 01:33:06 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data4' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:06 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:06-24", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651986068), what: "moveChunk.to", ns: "buy.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 996 } }
m30001| Thu Jun 14 01:33:06 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data4", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:06 [conn4] moveChunk updating self version to: 2|1||4fd977912cb587101f2e7a31 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data4'
m30001| Thu Jun 14 01:33:06 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:06-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651986072), what: "moveChunk.commit", ns: "buy.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:06 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:06 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:06 [conn4] distributed lock 'buy.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:06 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:06-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651986081), what: "moveChunk.from", ns: "buy.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:06 [conn4] command admin.$cmd command: { moveChunk: "buy.data4", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data4-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:6073 w:72400 reslen:37 1022ms
m30999| Thu Jun 14 01:33:06 [conn] ChunkManager: time to load chunks for buy.data4: 0ms sequenceNumber: 40 version: 2|1||4fd977912cb587101f2e7a31 based on: 1|2||4fd977912cb587101f2e7a31
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:33:06 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data4", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:06 [conn] enable sharding on: buy_201107.data4 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:06 [conn] going to create 1 chunk(s) for: buy_201107.data4 using new epoch 4fd977922cb587101f2e7a32
m30999| Thu Jun 14 01:33:06 [conn] ChunkManager: time to load chunks for buy_201107.data4: 0ms sequenceNumber: 41 version: 1|0||4fd977922cb587101f2e7a32 based on: (empty)
m30001| Thu Jun 14 01:33:06 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:06 [conn] splitting: buy_201107.data4 shard: ns:buy_201107.data4 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:06 [conn4] received splitChunk request: { splitChunk: "buy_201107.data4", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data4-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:06 [conn4] created new distributed lock for buy_201107.data4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:06 [conn4] distributed lock 'buy_201107.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977925a33af45ea50d93c
m30001| Thu Jun 14 01:33:06 [conn4] splitChunk accepted at version 1|0||4fd977922cb587101f2e7a32
m30001| Thu Jun 14 01:33:06 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:06-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651986088), what: "split", ns: "buy_201107.data4", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977922cb587101f2e7a32') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977922cb587101f2e7a32') } } }
m30001| Thu Jun 14 01:33:06 [conn4] distributed lock 'buy_201107.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:06 [conn] ChunkManager: time to load chunks for buy_201107.data4: 0ms sequenceNumber: 42 version: 1|2||4fd977922cb587101f2e7a32 based on: 1|0||4fd977922cb587101f2e7a32
m30999| Thu Jun 14 01:33:06 [conn] CMD: movechunk: { movechunk: "buy_201107.data4", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:06 [conn] moving chunk ns: buy_201107.data4 moving ( ns:buy_201107.data4 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:06 [conn4] received moveChunk request: { moveChunk: "buy_201107.data4", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data4-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:06 [conn4] created new distributed lock for buy_201107.data4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:06 [conn4] distributed lock 'buy_201107.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977925a33af45ea50d93d
m30001| Thu Jun 14 01:33:06 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:06-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651986091), what: "moveChunk.start", ns: "buy_201107.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:06 [conn4] moveChunk request accepted at version 1|2||4fd977922cb587101f2e7a32
m30001| Thu Jun 14 01:33:06 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:06 [migrateThread] build index buy_201107.data4 { _id: 1 }
m30000| Thu Jun 14 01:33:06 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:06 [migrateThread] info: creating collection buy_201107.data4 on add index
m30000| Thu Jun 14 01:33:06 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data4' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:07 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data4", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:07 [conn4] moveChunk setting version to: 2|0||4fd977922cb587101f2e7a32
m30000| Thu Jun 14 01:33:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data4' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:07 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:07-25", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651987100), what: "moveChunk.to", ns: "buy_201107.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 996 } }
m30001| Thu Jun 14 01:33:07 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data4", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:07 [conn4] moveChunk updating self version to: 2|1||4fd977922cb587101f2e7a32 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data4'
m30001| Thu Jun 14 01:33:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:07-42", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651987104), what: "moveChunk.commit", ns: "buy_201107.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:07 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:07 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:07 [conn4] distributed lock 'buy_201107.data4/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:07-43", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651987115), what: "moveChunk.from", ns: "buy_201107.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 10 } }
m30001| Thu Jun 14 01:33:07 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data4", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data4-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:6714 w:80915 reslen:37 1025ms
m30999| Thu Jun 14 01:33:07 [conn] ChunkManager: time to load chunks for buy_201107.data4: 0ms sequenceNumber: 43 version: 2|1||4fd977922cb587101f2e7a32 based on: 1|2||4fd977922cb587101f2e7a32
{ "millis" : 1025, "ok" : 1 }
m30999| Thu Jun 14 01:33:07 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data4", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:07 [conn] enable sharding on: buy_201108.data4 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:07 [conn] going to create 1 chunk(s) for: buy_201108.data4 using new epoch 4fd977932cb587101f2e7a33
m30999| Thu Jun 14 01:33:07 [conn] ChunkManager: time to load chunks for buy_201108.data4: 0ms sequenceNumber: 44 version: 1|0||4fd977932cb587101f2e7a33 based on: (empty)
m30000| Thu Jun 14 01:33:07 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:07 [conn] splitting: buy_201108.data4 shard: ns:buy_201108.data4 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:07 [conn6] received splitChunk request: { splitChunk: "buy_201108.data4", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data4-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:07 [conn6] created new distributed lock for buy_201108.data4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:07 [conn6] distributed lock 'buy_201108.data4/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97793488b0031418cbf06
m30000| Thu Jun 14 01:33:07 [conn6] splitChunk accepted at version 1|0||4fd977932cb587101f2e7a33
m30000| Thu Jun 14 01:33:07 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:07-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651987121), what: "split", ns: "buy_201108.data4", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977932cb587101f2e7a33') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977932cb587101f2e7a33') } } }
m30000| Thu Jun 14 01:33:07 [conn6] distributed lock 'buy_201108.data4/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:07 [conn] ChunkManager: time to load chunks for buy_201108.data4: 0ms sequenceNumber: 45 version: 1|2||4fd977932cb587101f2e7a33 based on: 1|0||4fd977932cb587101f2e7a33
m30999| Thu Jun 14 01:33:07 [conn] CMD: movechunk: { movechunk: "buy_201108.data4", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:07 [conn] moving chunk ns: buy_201108.data4 moving ( ns:buy_201108.data4 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:07 [conn6] received moveChunk request: { moveChunk: "buy_201108.data4", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data4-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:07 [conn6] created new distributed lock for buy_201108.data4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:07 [conn6] distributed lock 'buy_201108.data4/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97793488b0031418cbf07
m30000| Thu Jun 14 01:33:07 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:07-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651987124), what: "moveChunk.start", ns: "buy_201108.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:07 [conn6] moveChunk request accepted at version 1|2||4fd977932cb587101f2e7a33
m30000| Thu Jun 14 01:33:07 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:07 [migrateThread] build index buy_201108.data4 { _id: 1 }
m30001| Thu Jun 14 01:33:07 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:07 [migrateThread] info: creating collection buy_201108.data4 on add index
m30001| Thu Jun 14 01:33:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data4' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:08 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data4", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:08 [conn6] moveChunk setting version to: 2|0||4fd977932cb587101f2e7a33
m30001| Thu Jun 14 01:33:08 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data4' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:08 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:08-44", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651988132), what: "moveChunk.to", ns: "buy_201108.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 995 } }
m30000| Thu Jun 14 01:33:08 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data4", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:08 [conn6] moveChunk updating self version to: 2|1||4fd977932cb587101f2e7a33 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data4'
m30000| Thu Jun 14 01:33:08 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:08-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651988136), what: "moveChunk.commit", ns: "buy_201108.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:08 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:08 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:08 [conn6] distributed lock 'buy_201108.data4/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:08 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:08-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651988146), what: "moveChunk.from", ns: "buy_201108.data4", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 8, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:08 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data4", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data4-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:14406 w:38604 reslen:37 1022ms
m30999| Thu Jun 14 01:33:08 [conn] ChunkManager: time to load chunks for buy_201108.data4: 0ms sequenceNumber: 46 version: 2|1||4fd977932cb587101f2e7a33 based on: 1|2||4fd977932cb587101f2e7a33
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:33:08 [conn] CMD: shardcollection: { shardcollection: "buy.data5", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:08 [conn] enable sharding on: buy.data5 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:08 [conn] going to create 1 chunk(s) for: buy.data5 using new epoch 4fd977942cb587101f2e7a34
m30999| Thu Jun 14 01:33:08 [conn] ChunkManager: time to load chunks for buy.data5: 0ms sequenceNumber: 47 version: 1|0||4fd977942cb587101f2e7a34 based on: (empty)
m30001| Thu Jun 14 01:33:08 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:08 [conn] splitting: buy.data5 shard: ns:buy.data5 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:08 [conn4] received splitChunk request: { splitChunk: "buy.data5", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data5-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:08 [conn4] created new distributed lock for buy.data5 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:08 [conn4] distributed lock 'buy.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977945a33af45ea50d93e
m30001| Thu Jun 14 01:33:08 [conn4] splitChunk accepted at version 1|0||4fd977942cb587101f2e7a34
m30001| Thu Jun 14 01:33:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:08-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651988152), what: "split", ns: "buy.data5", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977942cb587101f2e7a34') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977942cb587101f2e7a34') } } }
m30001| Thu Jun 14 01:33:08 [conn4] distributed lock 'buy.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:08 [conn] ChunkManager: time to load chunks for buy.data5: 0ms sequenceNumber: 48 version: 1|2||4fd977942cb587101f2e7a34 based on: 1|0||4fd977942cb587101f2e7a34
m30999| Thu Jun 14 01:33:08 [conn] CMD: movechunk: { movechunk: "buy.data5", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:08 [conn] moving chunk ns: buy.data5 moving ( ns:buy.data5 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:08 [conn4] received moveChunk request: { moveChunk: "buy.data5", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data5-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:08 [conn4] created new distributed lock for buy.data5 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:08 [conn4] distributed lock 'buy.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977945a33af45ea50d93f
m30001| Thu Jun 14 01:33:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:08-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651988155), what: "moveChunk.start", ns: "buy.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:08 [conn4] moveChunk request accepted at version 1|2||4fd977942cb587101f2e7a34
m30001| Thu Jun 14 01:33:08 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:08 [migrateThread] build index buy.data5 { _id: 1 }
m30000| Thu Jun 14 01:33:08 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:08 [migrateThread] info: creating collection buy.data5 on add index
m30000| Thu Jun 14 01:33:08 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data5' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:09 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data5", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:09 [conn4] moveChunk setting version to: 2|0||4fd977942cb587101f2e7a34
m30000| Thu Jun 14 01:33:09 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data5' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:09 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:09-30", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651989164), what: "moveChunk.to", ns: "buy.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 996 } }
m30001| Thu Jun 14 01:33:09 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data5", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:09 [conn4] moveChunk updating self version to: 2|1||4fd977942cb587101f2e7a34 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data5'
m30001| Thu Jun 14 01:33:09 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:09-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651989168), what: "moveChunk.commit", ns: "buy.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:09 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:09 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:09 [conn4] distributed lock 'buy.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:09 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:09-48", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651989177), what: "moveChunk.from", ns: "buy.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:09 [conn4] command admin.$cmd command: { moveChunk: "buy.data5", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data5-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:7365 w:88023 reslen:37 1022ms
m30999| Thu Jun 14 01:33:09 [conn] ChunkManager: time to load chunks for buy.data5: 0ms sequenceNumber: 49 version: 2|1||4fd977942cb587101f2e7a34 based on: 1|2||4fd977942cb587101f2e7a34
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:33:09 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data5", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:09 [conn] enable sharding on: buy_201107.data5 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:09 [conn] going to create 1 chunk(s) for: buy_201107.data5 using new epoch 4fd977952cb587101f2e7a35
m30999| Thu Jun 14 01:33:09 [conn] ChunkManager: time to load chunks for buy_201107.data5: 0ms sequenceNumber: 50 version: 1|0||4fd977952cb587101f2e7a35 based on: (empty)
m30001| Thu Jun 14 01:33:09 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:09 [conn] splitting: buy_201107.data5 shard: ns:buy_201107.data5 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:09 [conn4] received splitChunk request: { splitChunk: "buy_201107.data5", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data5-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:09 [conn4] created new distributed lock for buy_201107.data5 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:09 [conn4] distributed lock 'buy_201107.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977955a33af45ea50d940
m30001| Thu Jun 14 01:33:09 [conn4] splitChunk accepted at version 1|0||4fd977952cb587101f2e7a35
m30001| Thu Jun 14 01:33:09 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:09-49", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651989184), what: "split", ns: "buy_201107.data5", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977952cb587101f2e7a35') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977952cb587101f2e7a35') } } }
m30001| Thu Jun 14 01:33:09 [conn4] distributed lock 'buy_201107.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:09 [conn] ChunkManager: time to load chunks for buy_201107.data5: 0ms sequenceNumber: 51 version: 1|2||4fd977952cb587101f2e7a35 based on: 1|0||4fd977952cb587101f2e7a35
m30999| Thu Jun 14 01:33:09 [conn] CMD: movechunk: { movechunk: "buy_201107.data5", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:09 [conn] moving chunk ns: buy_201107.data5 moving ( ns:buy_201107.data5 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:09 [conn4] received moveChunk request: { moveChunk: "buy_201107.data5", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data5-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:09 [conn4] created new distributed lock for buy_201107.data5 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:09 [conn4] distributed lock 'buy_201107.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977955a33af45ea50d941
m30001| Thu Jun 14 01:33:09 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:09-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651989187), what: "moveChunk.start", ns: "buy_201107.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:09 [conn4] moveChunk request accepted at version 1|2||4fd977952cb587101f2e7a35
m30001| Thu Jun 14 01:33:09 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:09 [migrateThread] build index buy_201107.data5 { _id: 1 }
m30000| Thu Jun 14 01:33:09 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:09 [migrateThread] info: creating collection buy_201107.data5 on add index
m30000| Thu Jun 14 01:33:09 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data5' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:10 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data5", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:10 [conn4] moveChunk setting version to: 2|0||4fd977952cb587101f2e7a35
m30000| Thu Jun 14 01:33:10 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data5' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:10 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:10-31", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651990204), what: "moveChunk.to", ns: "buy_201107.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 1004 } }
m30001| Thu Jun 14 01:33:10 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data5", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:10 [conn4] moveChunk updating self version to: 2|1||4fd977952cb587101f2e7a35 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data5'
m30001| Thu Jun 14 01:33:10 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:10-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651990208), what: "moveChunk.commit", ns: "buy_201107.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:10 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:10 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:10 [conn4] distributed lock 'buy_201107.data5/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:10 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:10-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651990218), what: "moveChunk.from", ns: "buy_201107.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1004, step5 of 6: 16, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:10 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data5", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data5-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:7996 w:95518 reslen:37 1031ms
m30999| Thu Jun 14 01:33:10 [conn] ChunkManager: time to load chunks for buy_201107.data5: 0ms sequenceNumber: 52 version: 2|1||4fd977952cb587101f2e7a35 based on: 1|2||4fd977952cb587101f2e7a35
{ "millis" : 1032, "ok" : 1 }
m30999| Thu Jun 14 01:33:10 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data5", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:10 [conn] enable sharding on: buy_201108.data5 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:10 [conn] going to create 1 chunk(s) for: buy_201108.data5 using new epoch 4fd977962cb587101f2e7a36
m30999| Thu Jun 14 01:33:10 [conn] ChunkManager: time to load chunks for buy_201108.data5: 0ms sequenceNumber: 53 version: 1|0||4fd977962cb587101f2e7a36 based on: (empty)
m30000| Thu Jun 14 01:33:10 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:10 [conn] splitting: buy_201108.data5 shard: ns:buy_201108.data5 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:10 [conn6] received splitChunk request: { splitChunk: "buy_201108.data5", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data5-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:10 [conn6] created new distributed lock for buy_201108.data5 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:10 [conn6] distributed lock 'buy_201108.data5/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97796488b0031418cbf08
m30000| Thu Jun 14 01:33:10 [conn6] splitChunk accepted at version 1|0||4fd977962cb587101f2e7a36
m30000| Thu Jun 14 01:33:10 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:10-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651990224), what: "split", ns: "buy_201108.data5", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977962cb587101f2e7a36') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977962cb587101f2e7a36') } } }
m30000| Thu Jun 14 01:33:10 [conn6] distributed lock 'buy_201108.data5/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:10 [conn] ChunkManager: time to load chunks for buy_201108.data5: 0ms sequenceNumber: 54 version: 1|2||4fd977962cb587101f2e7a36 based on: 1|0||4fd977962cb587101f2e7a36
m30999| Thu Jun 14 01:33:10 [conn] CMD: movechunk: { movechunk: "buy_201108.data5", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:10 [conn] moving chunk ns: buy_201108.data5 moving ( ns:buy_201108.data5 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:10 [conn6] received moveChunk request: { moveChunk: "buy_201108.data5", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data5-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:10 [conn6] created new distributed lock for buy_201108.data5 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:10 [conn6] distributed lock 'buy_201108.data5/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97796488b0031418cbf09
m30000| Thu Jun 14 01:33:10 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:10-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651990228), what: "moveChunk.start", ns: "buy_201108.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:10 [conn6] moveChunk request accepted at version 1|2||4fd977962cb587101f2e7a36
m30000| Thu Jun 14 01:33:10 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:10 [migrateThread] build index buy_201108.data5 { _id: 1 }
m30001| Thu Jun 14 01:33:10 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:10 [migrateThread] info: creating collection buy_201108.data5 on add index
m30001| Thu Jun 14 01:33:10 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data5' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:11 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data5", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:11 [conn6] moveChunk setting version to: 2|0||4fd977962cb587101f2e7a36
m30001| Thu Jun 14 01:33:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data5' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:11 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:11-53", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651991240), what: "moveChunk.to", ns: "buy_201108.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 999 } }
m30000| Thu Jun 14 01:33:11 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data5", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:11 [conn6] moveChunk updating self version to: 2|1||4fd977962cb587101f2e7a36 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data5'
m30000| Thu Jun 14 01:33:11 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:11-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651991245), what: "moveChunk.commit", ns: "buy_201108.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:11 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:11 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:11 [conn6] distributed lock 'buy_201108.data5/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:11 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:11-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651991254), what: "moveChunk.from", ns: "buy_201108.data5", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:11 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data5", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data5-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:17323 w:46245 reslen:37 1027ms
m30999| Thu Jun 14 01:33:11 [conn] ChunkManager: time to load chunks for buy_201108.data5: 0ms sequenceNumber: 55 version: 2|1||4fd977962cb587101f2e7a36 based on: 1|2||4fd977962cb587101f2e7a36
{ "millis" : 1029, "ok" : 1 }
m30999| Thu Jun 14 01:33:11 [conn] CMD: shardcollection: { shardcollection: "buy.data6", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:11 [conn] enable sharding on: buy.data6 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:11 [conn] going to create 1 chunk(s) for: buy.data6 using new epoch 4fd977972cb587101f2e7a37
m30999| Thu Jun 14 01:33:11 [conn] ChunkManager: time to load chunks for buy.data6: 0ms sequenceNumber: 56 version: 1|0||4fd977972cb587101f2e7a37 based on: (empty)
m30001| Thu Jun 14 01:33:11 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:11 [conn] splitting: buy.data6 shard: ns:buy.data6 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:11 [conn4] received splitChunk request: { splitChunk: "buy.data6", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data6-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:11 [conn4] created new distributed lock for buy.data6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:11 [conn4] distributed lock 'buy.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977975a33af45ea50d942
m30001| Thu Jun 14 01:33:11 [conn4] splitChunk accepted at version 1|0||4fd977972cb587101f2e7a37
m30001| Thu Jun 14 01:33:11 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:11-54", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651991263), what: "split", ns: "buy.data6", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977972cb587101f2e7a37') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977972cb587101f2e7a37') } } }
m30001| Thu Jun 14 01:33:11 [conn4] distributed lock 'buy.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:11 [conn] ChunkManager: time to load chunks for buy.data6: 0ms sequenceNumber: 57 version: 1|2||4fd977972cb587101f2e7a37 based on: 1|0||4fd977972cb587101f2e7a37
m30999| Thu Jun 14 01:33:11 [conn] CMD: movechunk: { movechunk: "buy.data6", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:11 [conn] moving chunk ns: buy.data6 moving ( ns:buy.data6 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:11 [conn4] received moveChunk request: { moveChunk: "buy.data6", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data6-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:11 [conn4] created new distributed lock for buy.data6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:11 [conn4] distributed lock 'buy.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977975a33af45ea50d943
m30001| Thu Jun 14 01:33:11 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:11-55", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651991266), what: "moveChunk.start", ns: "buy.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:11 [conn4] moveChunk request accepted at version 1|2||4fd977972cb587101f2e7a37
m30001| Thu Jun 14 01:33:11 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:11 [migrateThread] build index buy.data6 { _id: 1 }
m30000| Thu Jun 14 01:33:11 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:11 [migrateThread] info: creating collection buy.data6 on add index
m30000| Thu Jun 14 01:33:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data6' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:12 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data6", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:12 [conn4] moveChunk setting version to: 2|0||4fd977972cb587101f2e7a37
m30000| Thu Jun 14 01:33:12 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data6' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:12 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:12-36", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651992276), what: "moveChunk.to", ns: "buy.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 997 } }
m30001| Thu Jun 14 01:33:12 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data6", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:12 [conn4] moveChunk updating self version to: 2|1||4fd977972cb587101f2e7a37 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data6'
m30001| Thu Jun 14 01:33:12 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:12-56", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651992281), what: "moveChunk.commit", ns: "buy.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:12 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:12 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:12 [conn4] distributed lock 'buy.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:12 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:12-57", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651992290), what: "moveChunk.from", ns: "buy.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:12 [conn4] command admin.$cmd command: { moveChunk: "buy.data6", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data6-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:8647 w:102997 reslen:37 1024ms
m30999| Thu Jun 14 01:33:12 [conn] ChunkManager: time to load chunks for buy.data6: 0ms sequenceNumber: 58 version: 2|1||4fd977972cb587101f2e7a37 based on: 1|2||4fd977972cb587101f2e7a37
{ "millis" : 1025, "ok" : 1 }
m30999| Thu Jun 14 01:33:12 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data6", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:12 [conn] enable sharding on: buy_201107.data6 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:12 [conn] going to create 1 chunk(s) for: buy_201107.data6 using new epoch 4fd977982cb587101f2e7a38
m30999| Thu Jun 14 01:33:12 [conn] ChunkManager: time to load chunks for buy_201107.data6: 0ms sequenceNumber: 59 version: 1|0||4fd977982cb587101f2e7a38 based on: (empty)
m30001| Thu Jun 14 01:33:12 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:12 [conn] splitting: buy_201107.data6 shard: ns:buy_201107.data6 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:12 [conn4] received splitChunk request: { splitChunk: "buy_201107.data6", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data6-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:12 [conn4] created new distributed lock for buy_201107.data6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:12 [conn4] distributed lock 'buy_201107.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977985a33af45ea50d944
m30001| Thu Jun 14 01:33:12 [conn4] splitChunk accepted at version 1|0||4fd977982cb587101f2e7a38
m30001| Thu Jun 14 01:33:12 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:12-58", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651992298), what: "split", ns: "buy_201107.data6", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977982cb587101f2e7a38') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977982cb587101f2e7a38') } } }
m30001| Thu Jun 14 01:33:12 [conn4] distributed lock 'buy_201107.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:12 [conn] ChunkManager: time to load chunks for buy_201107.data6: 0ms sequenceNumber: 60 version: 1|2||4fd977982cb587101f2e7a38 based on: 1|0||4fd977982cb587101f2e7a38
m30999| Thu Jun 14 01:33:12 [conn] CMD: movechunk: { movechunk: "buy_201107.data6", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:12 [conn] moving chunk ns: buy_201107.data6 moving ( ns:buy_201107.data6 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:12 [conn4] received moveChunk request: { moveChunk: "buy_201107.data6", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data6-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:12 [conn4] created new distributed lock for buy_201107.data6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:12 [conn4] distributed lock 'buy_201107.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977985a33af45ea50d945
m30001| Thu Jun 14 01:33:12 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:12-59", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651992301), what: "moveChunk.start", ns: "buy_201107.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:12 [conn4] moveChunk request accepted at version 1|2||4fd977982cb587101f2e7a38
m30001| Thu Jun 14 01:33:12 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:12 [migrateThread] build index buy_201107.data6 { _id: 1 }
m30000| Thu Jun 14 01:33:12 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:12 [migrateThread] info: creating collection buy_201107.data6 on add index
m30000| Thu Jun 14 01:33:12 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data6' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:13 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data6", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:13 [conn4] moveChunk setting version to: 2|0||4fd977982cb587101f2e7a38
m30000| Thu Jun 14 01:33:13 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data6' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:13 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:13-37", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651993312), what: "moveChunk.to", ns: "buy_201107.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 997 } }
m30001| Thu Jun 14 01:33:13 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data6", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:13 [conn4] moveChunk updating self version to: 2|1||4fd977982cb587101f2e7a38 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data6'
m30001| Thu Jun 14 01:33:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:13-60", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651993317), what: "moveChunk.commit", ns: "buy_201107.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:13 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:13 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:13 [conn4] distributed lock 'buy_201107.data6/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:13-61", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651993326), what: "moveChunk.from", ns: "buy_201107.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 9 } }
m30001| Thu Jun 14 01:33:13 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data6", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data6-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:9305 w:110588 reslen:37 1025ms
m30999| Thu Jun 14 01:33:13 [conn] ChunkManager: time to load chunks for buy_201107.data6: 0ms sequenceNumber: 61 version: 2|1||4fd977982cb587101f2e7a38 based on: 1|2||4fd977982cb587101f2e7a38
{ "millis" : 1026, "ok" : 1 }
m30999| Thu Jun 14 01:33:13 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data6", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:13 [conn] enable sharding on: buy_201108.data6 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:13 [conn] going to create 1 chunk(s) for: buy_201108.data6 using new epoch 4fd977992cb587101f2e7a39
m30999| Thu Jun 14 01:33:13 [conn] ChunkManager: time to load chunks for buy_201108.data6: 0ms sequenceNumber: 62 version: 1|0||4fd977992cb587101f2e7a39 based on: (empty)
m30000| Thu Jun 14 01:33:13 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:13 [conn] splitting: buy_201108.data6 shard: ns:buy_201108.data6 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:13 [conn6] received splitChunk request: { splitChunk: "buy_201108.data6", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data6-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:13 [conn6] created new distributed lock for buy_201108.data6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:13 [conn6] distributed lock 'buy_201108.data6/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97799488b0031418cbf0a
m30000| Thu Jun 14 01:33:13 [conn6] splitChunk accepted at version 1|0||4fd977992cb587101f2e7a39
m30000| Thu Jun 14 01:33:13 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:13-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651993334), what: "split", ns: "buy_201108.data6", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977992cb587101f2e7a39') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977992cb587101f2e7a39') } } }
m30000| Thu Jun 14 01:33:13 [conn6] distributed lock 'buy_201108.data6/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:13 [conn] ChunkManager: time to load chunks for buy_201108.data6: 0ms sequenceNumber: 63 version: 1|2||4fd977992cb587101f2e7a39 based on: 1|0||4fd977992cb587101f2e7a39
m30999| Thu Jun 14 01:33:13 [conn] CMD: movechunk: { movechunk: "buy_201108.data6", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:13 [conn] moving chunk ns: buy_201108.data6 moving ( ns:buy_201108.data6 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:13 [conn6] received moveChunk request: { moveChunk: "buy_201108.data6", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data6-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:13 [conn6] created new distributed lock for buy_201108.data6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:13 [conn6] distributed lock 'buy_201108.data6/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd97799488b0031418cbf0b
m30000| Thu Jun 14 01:33:13 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:13-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651993338), what: "moveChunk.start", ns: "buy_201108.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:13 [conn6] moveChunk request accepted at version 1|2||4fd977992cb587101f2e7a39
m30000| Thu Jun 14 01:33:13 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:13 [migrateThread] build index buy_201108.data6 { _id: 1 }
m30001| Thu Jun 14 01:33:13 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:13 [migrateThread] info: creating collection buy_201108.data6 on add index
m30001| Thu Jun 14 01:33:13 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data6' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:14 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data6", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:14 [conn6] moveChunk setting version to: 2|0||4fd977992cb587101f2e7a39
m30001| Thu Jun 14 01:33:14 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data6' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:14 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:14-62", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651994348), what: "moveChunk.to", ns: "buy_201108.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 997 } }
m30000| Thu Jun 14 01:33:14 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data6", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:14 [conn6] moveChunk updating self version to: 2|1||4fd977992cb587101f2e7a39 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data6'
m30000| Thu Jun 14 01:33:14 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:14-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651994353), what: "moveChunk.commit", ns: "buy_201108.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:14 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:14 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:14 [conn6] distributed lock 'buy_201108.data6/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:14 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:14-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651994363), what: "moveChunk.from", ns: "buy_201108.data6", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:14 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data6", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data6-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:20333 w:53975 reslen:37 1026ms
m30999| Thu Jun 14 01:33:14 [conn] ChunkManager: time to load chunks for buy_201108.data6: 0ms sequenceNumber: 64 version: 2|1||4fd977992cb587101f2e7a39 based on: 1|2||4fd977992cb587101f2e7a39
{ "millis" : 1027, "ok" : 1 }
m30999| Thu Jun 14 01:33:14 [conn] CMD: shardcollection: { shardcollection: "buy.data7", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:14 [conn] enable sharding on: buy.data7 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:14 [conn] going to create 1 chunk(s) for: buy.data7 using new epoch 4fd9779a2cb587101f2e7a3a
m30999| Thu Jun 14 01:33:14 [conn] ChunkManager: time to load chunks for buy.data7: 0ms sequenceNumber: 65 version: 1|0||4fd9779a2cb587101f2e7a3a based on: (empty)
m30001| Thu Jun 14 01:33:14 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:14 [conn] splitting: buy.data7 shard: ns:buy.data7 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:14 [conn4] received splitChunk request: { splitChunk: "buy.data7", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data7-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:14 [conn4] created new distributed lock for buy.data7 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:14 [conn4] distributed lock 'buy.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779a5a33af45ea50d946
m30001| Thu Jun 14 01:33:14 [conn4] splitChunk accepted at version 1|0||4fd9779a2cb587101f2e7a3a
m30001| Thu Jun 14 01:33:14 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:14-63", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651994371), what: "split", ns: "buy.data7", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9779a2cb587101f2e7a3a') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9779a2cb587101f2e7a3a') } } }
m30001| Thu Jun 14 01:33:14 [conn4] distributed lock 'buy.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:14 [conn] ChunkManager: time to load chunks for buy.data7: 0ms sequenceNumber: 66 version: 1|2||4fd9779a2cb587101f2e7a3a based on: 1|0||4fd9779a2cb587101f2e7a3a
m30999| Thu Jun 14 01:33:14 [conn] CMD: movechunk: { movechunk: "buy.data7", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:14 [conn] moving chunk ns: buy.data7 moving ( ns:buy.data7 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:14 [conn4] received moveChunk request: { moveChunk: "buy.data7", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data7-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:14 [conn4] created new distributed lock for buy.data7 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:14 [conn4] distributed lock 'buy.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779a5a33af45ea50d947
m30001| Thu Jun 14 01:33:14 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:14-64", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651994374), what: "moveChunk.start", ns: "buy.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:14 [conn4] moveChunk request accepted at version 1|2||4fd9779a2cb587101f2e7a3a
m30001| Thu Jun 14 01:33:14 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:14 [migrateThread] build index buy.data7 { _id: 1 }
m30000| Thu Jun 14 01:33:14 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:14 [migrateThread] info: creating collection buy.data7 on add index
m30000| Thu Jun 14 01:33:14 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data7' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:15 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data7", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:15 [conn4] moveChunk setting version to: 2|0||4fd9779a2cb587101f2e7a3a
m30000| Thu Jun 14 01:33:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data7' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:15 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:15-42", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651995384), what: "moveChunk.to", ns: "buy.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 997 } }
m30001| Thu Jun 14 01:33:15 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data7", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:15 [conn4] moveChunk updating self version to: 2|1||4fd9779a2cb587101f2e7a3a through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data7'
m30001| Thu Jun 14 01:33:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:15-65", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651995389), what: "moveChunk.commit", ns: "buy.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:15 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:15 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:15 [conn4] distributed lock 'buy.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:15-66", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651995398), what: "moveChunk.from", ns: "buy.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:15 [conn4] command admin.$cmd command: { moveChunk: "buy.data7", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data7-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:9956 w:117644 reslen:37 1024ms
m30999| Thu Jun 14 01:33:15 [conn] ChunkManager: time to load chunks for buy.data7: 0ms sequenceNumber: 67 version: 2|1||4fd9779a2cb587101f2e7a3a based on: 1|2||4fd9779a2cb587101f2e7a3a
{ "millis" : 1025, "ok" : 1 }
m30999| Thu Jun 14 01:33:15 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data7", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:15 [conn] enable sharding on: buy_201107.data7 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:15 [conn] going to create 1 chunk(s) for: buy_201107.data7 using new epoch 4fd9779b2cb587101f2e7a3b
m30999| Thu Jun 14 01:33:15 [conn] ChunkManager: time to load chunks for buy_201107.data7: 0ms sequenceNumber: 68 version: 1|0||4fd9779b2cb587101f2e7a3b based on: (empty)
m30001| Thu Jun 14 01:33:15 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:15 [conn] splitting: buy_201107.data7 shard: ns:buy_201107.data7 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:15 [conn4] received splitChunk request: { splitChunk: "buy_201107.data7", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data7-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:15 [conn4] created new distributed lock for buy_201107.data7 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:15 [conn4] distributed lock 'buy_201107.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779b5a33af45ea50d948
m30001| Thu Jun 14 01:33:15 [conn4] splitChunk accepted at version 1|0||4fd9779b2cb587101f2e7a3b
m30001| Thu Jun 14 01:33:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:15-67", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651995405), what: "split", ns: "buy_201107.data7", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9779b2cb587101f2e7a3b') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9779b2cb587101f2e7a3b') } } }
m30001| Thu Jun 14 01:33:15 [conn4] distributed lock 'buy_201107.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:15 [conn] ChunkManager: time to load chunks for buy_201107.data7: 0ms sequenceNumber: 69 version: 1|2||4fd9779b2cb587101f2e7a3b based on: 1|0||4fd9779b2cb587101f2e7a3b
m30999| Thu Jun 14 01:33:15 [conn] CMD: movechunk: { movechunk: "buy_201107.data7", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:15 [conn] moving chunk ns: buy_201107.data7 moving ( ns:buy_201107.data7 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:15 [conn4] received moveChunk request: { moveChunk: "buy_201107.data7", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data7-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:15 [conn4] created new distributed lock for buy_201107.data7 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:15 [conn4] distributed lock 'buy_201107.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779b5a33af45ea50d949
m30001| Thu Jun 14 01:33:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:15-68", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651995408), what: "moveChunk.start", ns: "buy_201107.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:15 [conn4] moveChunk request accepted at version 1|2||4fd9779b2cb587101f2e7a3b
m30001| Thu Jun 14 01:33:15 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:15 [migrateThread] build index buy_201107.data7 { _id: 1 }
m30000| Thu Jun 14 01:33:15 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:15 [migrateThread] info: creating collection buy_201107.data7 on add index
m30000| Thu Jun 14 01:33:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data7' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:16 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data7", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:16 [conn4] moveChunk setting version to: 2|0||4fd9779b2cb587101f2e7a3b
m30000| Thu Jun 14 01:33:16 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data7' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:16 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:16-43", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651996416), what: "moveChunk.to", ns: "buy_201107.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:33:16 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data7", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:16 [conn4] moveChunk updating self version to: 2|1||4fd9779b2cb587101f2e7a3b through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data7'
m30001| Thu Jun 14 01:33:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:16-69", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651996421), what: "moveChunk.commit", ns: "buy_201107.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:16 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:16 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:16 [conn4] distributed lock 'buy_201107.data7/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:16-70", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651996430), what: "moveChunk.from", ns: "buy_201107.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:16 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data7", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data7-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:10592 w:125087 reslen:37 1022ms
m30999| Thu Jun 14 01:33:16 [conn] ChunkManager: time to load chunks for buy_201107.data7: 0ms sequenceNumber: 70 version: 2|1||4fd9779b2cb587101f2e7a3b based on: 1|2||4fd9779b2cb587101f2e7a3b
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:33:16 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data7", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:16 [conn] enable sharding on: buy_201108.data7 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:16 [conn] going to create 1 chunk(s) for: buy_201108.data7 using new epoch 4fd9779c2cb587101f2e7a3c
m30999| Thu Jun 14 01:33:16 [conn] ChunkManager: time to load chunks for buy_201108.data7: 0ms sequenceNumber: 71 version: 1|0||4fd9779c2cb587101f2e7a3c based on: (empty)
m30000| Thu Jun 14 01:33:16 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:16 [conn] splitting: buy_201108.data7 shard: ns:buy_201108.data7 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:16 [conn6] received splitChunk request: { splitChunk: "buy_201108.data7", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data7-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:16 [conn6] created new distributed lock for buy_201108.data7 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:16 [conn6] distributed lock 'buy_201108.data7/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd9779c488b0031418cbf0c
m30000| Thu Jun 14 01:33:16 [conn6] splitChunk accepted at version 1|0||4fd9779c2cb587101f2e7a3c
m30000| Thu Jun 14 01:33:16 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:16-44", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651996437), what: "split", ns: "buy_201108.data7", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9779c2cb587101f2e7a3c') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9779c2cb587101f2e7a3c') } } }
m30000| Thu Jun 14 01:33:16 [conn6] distributed lock 'buy_201108.data7/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:16 [conn] ChunkManager: time to load chunks for buy_201108.data7: 0ms sequenceNumber: 72 version: 1|2||4fd9779c2cb587101f2e7a3c based on: 1|0||4fd9779c2cb587101f2e7a3c
m30999| Thu Jun 14 01:33:16 [conn] CMD: movechunk: { movechunk: "buy_201108.data7", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:16 [conn] moving chunk ns: buy_201108.data7 moving ( ns:buy_201108.data7 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:16 [conn6] received moveChunk request: { moveChunk: "buy_201108.data7", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data7-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:16 [conn6] created new distributed lock for buy_201108.data7 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:16 [conn6] distributed lock 'buy_201108.data7/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd9779c488b0031418cbf0d
m30000| Thu Jun 14 01:33:16 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:16-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651996440), what: "moveChunk.start", ns: "buy_201108.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:16 [conn6] moveChunk request accepted at version 1|2||4fd9779c2cb587101f2e7a3c
m30000| Thu Jun 14 01:33:16 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:16 [migrateThread] build index buy_201108.data7 { _id: 1 }
m30001| Thu Jun 14 01:33:16 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:16 [migrateThread] info: creating collection buy_201108.data7 on add index
m30001| Thu Jun 14 01:33:16 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data7' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:17 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data7", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:17 [conn6] moveChunk setting version to: 2|0||4fd9779c2cb587101f2e7a3c
m30001| Thu Jun 14 01:33:17 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data7' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:17 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:17-71", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651997448), what: "moveChunk.to", ns: "buy_201108.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 996 } }
m30000| Thu Jun 14 01:33:17 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data7", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:17 [conn6] moveChunk updating self version to: 2|1||4fd9779c2cb587101f2e7a3c through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data7'
m30000| Thu Jun 14 01:33:17 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:17-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651997453), what: "moveChunk.commit", ns: "buy_201108.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:17 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:17 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:17 [conn6] distributed lock 'buy_201108.data7/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:17 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:17-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651997463), what: "moveChunk.from", ns: "buy_201108.data7", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:17 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data7", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data7-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:23321 w:61725 reslen:37 1023ms
m30999| Thu Jun 14 01:33:17 [conn] ChunkManager: time to load chunks for buy_201108.data7: 0ms sequenceNumber: 73 version: 2|1||4fd9779c2cb587101f2e7a3c based on: 1|2||4fd9779c2cb587101f2e7a3c
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:33:17 [conn] CMD: shardcollection: { shardcollection: "buy.data8", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:17 [conn] enable sharding on: buy.data8 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:17 [conn] going to create 1 chunk(s) for: buy.data8 using new epoch 4fd9779d2cb587101f2e7a3d
m30999| Thu Jun 14 01:33:17 [conn] ChunkManager: time to load chunks for buy.data8: 0ms sequenceNumber: 74 version: 1|0||4fd9779d2cb587101f2e7a3d based on: (empty)
m30001| Thu Jun 14 01:33:17 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:17 [conn] splitting: buy.data8 shard: ns:buy.data8 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:17 [conn4] received splitChunk request: { splitChunk: "buy.data8", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data8-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:17 [conn4] created new distributed lock for buy.data8 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:17 [conn4] distributed lock 'buy.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779d5a33af45ea50d94a
m30001| Thu Jun 14 01:33:17 [conn4] splitChunk accepted at version 1|0||4fd9779d2cb587101f2e7a3d
m30001| Thu Jun 14 01:33:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:17-72", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651997470), what: "split", ns: "buy.data8", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9779d2cb587101f2e7a3d') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9779d2cb587101f2e7a3d') } } }
m30001| Thu Jun 14 01:33:17 [conn4] distributed lock 'buy.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:17 [conn] ChunkManager: time to load chunks for buy.data8: 0ms sequenceNumber: 75 version: 1|2||4fd9779d2cb587101f2e7a3d based on: 1|0||4fd9779d2cb587101f2e7a3d
m30999| Thu Jun 14 01:33:17 [conn] CMD: movechunk: { movechunk: "buy.data8", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:17 [conn] moving chunk ns: buy.data8 moving ( ns:buy.data8 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:17 [conn4] received moveChunk request: { moveChunk: "buy.data8", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data8-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:17 [conn4] created new distributed lock for buy.data8 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:17 [conn4] distributed lock 'buy.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779d5a33af45ea50d94b
m30001| Thu Jun 14 01:33:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:17-73", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651997473), what: "moveChunk.start", ns: "buy.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:17 [conn4] moveChunk request accepted at version 1|2||4fd9779d2cb587101f2e7a3d
m30001| Thu Jun 14 01:33:17 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:17 [migrateThread] build index buy.data8 { _id: 1 }
 m30000| Thu Jun 14 01:33:17 [migrateThread] build index done. scanned 0 total records. 0 secs
 m30000| Thu Jun 14 01:33:17 [clientcursormon] mem (MB) res:16 virt:120 mapped:0
m30000| Thu Jun 14 01:33:17 [migrateThread] info: creating collection buy.data8 on add index
m30000| Thu Jun 14 01:33:17 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data8' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:18 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data8", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:18 [conn4] moveChunk setting version to: 2|0||4fd9779d2cb587101f2e7a3d
m30000| Thu Jun 14 01:33:18 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data8' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:18 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:18-48", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651998480), what: "moveChunk.to", ns: "buy.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 994 } }
m30001| Thu Jun 14 01:33:18 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data8", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:18 [conn4] moveChunk updating self version to: 2|1||4fd9779d2cb587101f2e7a3d through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data8'
m30001| Thu Jun 14 01:33:18 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:18-74", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651998485), what: "moveChunk.commit", ns: "buy.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:18 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:18 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:18 [conn4] distributed lock 'buy.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:18 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:18-75", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651998494), what: "moveChunk.from", ns: "buy.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:18 [conn4] command admin.$cmd command: { moveChunk: "buy.data8", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data8-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:11280 w:132194 reslen:37 1021ms
m30999| Thu Jun 14 01:33:18 [conn] ChunkManager: time to load chunks for buy.data8: 0ms sequenceNumber: 76 version: 2|1||4fd9779d2cb587101f2e7a3d based on: 1|2||4fd9779d2cb587101f2e7a3d
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:33:18 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data8", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:18 [conn] enable sharding on: buy_201107.data8 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:18 [conn] going to create 1 chunk(s) for: buy_201107.data8 using new epoch 4fd9779e2cb587101f2e7a3e
m30999| Thu Jun 14 01:33:18 [conn] ChunkManager: time to load chunks for buy_201107.data8: 0ms sequenceNumber: 77 version: 1|0||4fd9779e2cb587101f2e7a3e based on: (empty)
m30001| Thu Jun 14 01:33:18 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:18 [conn] splitting: buy_201107.data8 shard: ns:buy_201107.data8 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:18 [conn4] received splitChunk request: { splitChunk: "buy_201107.data8", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data8-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:18 [conn4] created new distributed lock for buy_201107.data8 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:18 [conn4] distributed lock 'buy_201107.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779e5a33af45ea50d94c
m30001| Thu Jun 14 01:33:18 [conn4] splitChunk accepted at version 1|0||4fd9779e2cb587101f2e7a3e
m30001| Thu Jun 14 01:33:18 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:18-76", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651998501), what: "split", ns: "buy_201107.data8", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9779e2cb587101f2e7a3e') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9779e2cb587101f2e7a3e') } } }
m30001| Thu Jun 14 01:33:18 [conn4] distributed lock 'buy_201107.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:18 [conn] ChunkManager: time to load chunks for buy_201107.data8: 0ms sequenceNumber: 78 version: 1|2||4fd9779e2cb587101f2e7a3e based on: 1|0||4fd9779e2cb587101f2e7a3e
m30999| Thu Jun 14 01:33:18 [conn] CMD: movechunk: { movechunk: "buy_201107.data8", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:18 [conn] moving chunk ns: buy_201107.data8 moving ( ns:buy_201107.data8 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:18 [conn4] received moveChunk request: { moveChunk: "buy_201107.data8", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data8-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:18 [conn4] created new distributed lock for buy_201107.data8 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:18 [conn4] distributed lock 'buy_201107.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd9779e5a33af45ea50d94d
m30001| Thu Jun 14 01:33:18 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:18-77", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651998504), what: "moveChunk.start", ns: "buy_201107.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:18 [conn4] moveChunk request accepted at version 1|2||4fd9779e2cb587101f2e7a3e
m30001| Thu Jun 14 01:33:18 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:18 [migrateThread] build index buy_201107.data8 { _id: 1 }
m30000| Thu Jun 14 01:33:18 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:18 [migrateThread] info: creating collection buy_201107.data8 on add index
m30000| Thu Jun 14 01:33:18 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data8' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:19 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data8", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:19 [conn4] moveChunk setting version to: 2|0||4fd9779e2cb587101f2e7a3e
m30000| Thu Jun 14 01:33:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data8' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:19 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:19-49", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339651999513), what: "moveChunk.to", ns: "buy_201107.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:33:19 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data8", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:19 [conn4] moveChunk updating self version to: 2|1||4fd9779e2cb587101f2e7a3e through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data8'
m30001| Thu Jun 14 01:33:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:19-78", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651999517), what: "moveChunk.commit", ns: "buy_201107.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:19 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:19 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:19 [conn4] distributed lock 'buy_201107.data8/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:19-79", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339651999527), what: "moveChunk.from", ns: "buy_201107.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 9 } }
m30001| Thu Jun 14 01:33:19 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data8", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data8-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:11943 w:139846 reslen:37 1023ms
m30999| Thu Jun 14 01:33:19 [conn] ChunkManager: time to load chunks for buy_201107.data8: 0ms sequenceNumber: 79 version: 2|1||4fd9779e2cb587101f2e7a3e based on: 1|2||4fd9779e2cb587101f2e7a3e
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:33:19 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data8", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:19 [conn] enable sharding on: buy_201108.data8 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:19 [conn] going to create 1 chunk(s) for: buy_201108.data8 using new epoch 4fd9779f2cb587101f2e7a3f
m30999| Thu Jun 14 01:33:19 [conn] ChunkManager: time to load chunks for buy_201108.data8: 0ms sequenceNumber: 80 version: 1|0||4fd9779f2cb587101f2e7a3f based on: (empty)
m30000| Thu Jun 14 01:33:19 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:19 [conn] splitting: buy_201108.data8 shard: ns:buy_201108.data8 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:19 [conn6] received splitChunk request: { splitChunk: "buy_201108.data8", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data8-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:19 [conn6] created new distributed lock for buy_201108.data8 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:19 [conn6] distributed lock 'buy_201108.data8/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd9779f488b0031418cbf0e
m30000| Thu Jun 14 01:33:19 [conn6] splitChunk accepted at version 1|0||4fd9779f2cb587101f2e7a3f
m30000| Thu Jun 14 01:33:19 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:19-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651999533), what: "split", ns: "buy_201108.data8", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9779f2cb587101f2e7a3f') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9779f2cb587101f2e7a3f') } } }
m30000| Thu Jun 14 01:33:19 [conn6] distributed lock 'buy_201108.data8/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:19 [conn] ChunkManager: time to load chunks for buy_201108.data8: 0ms sequenceNumber: 81 version: 1|2||4fd9779f2cb587101f2e7a3f based on: 1|0||4fd9779f2cb587101f2e7a3f
m30999| Thu Jun 14 01:33:19 [conn] CMD: movechunk: { movechunk: "buy_201108.data8", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:19 [conn] moving chunk ns: buy_201108.data8 moving ( ns:buy_201108.data8 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:19 [conn6] received moveChunk request: { moveChunk: "buy_201108.data8", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data8-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:19 [conn6] created new distributed lock for buy_201108.data8 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:19 [conn6] distributed lock 'buy_201108.data8/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd9779f488b0031418cbf0f
m30000| Thu Jun 14 01:33:19 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:19-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339651999536), what: "moveChunk.start", ns: "buy_201108.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:19 [conn6] moveChunk request accepted at version 1|2||4fd9779f2cb587101f2e7a3f
m30000| Thu Jun 14 01:33:19 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:19 [migrateThread] build index buy_201108.data8 { _id: 1 }
m30001| Thu Jun 14 01:33:19 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:19 [migrateThread] info: creating collection buy_201108.data8 on add index
m30001| Thu Jun 14 01:33:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data8' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:20 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data8", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:20 [conn6] moveChunk setting version to: 2|0||4fd9779f2cb587101f2e7a3f
m30001| Thu Jun 14 01:33:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data8' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:20 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:20-80", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652000545), what: "moveChunk.to", ns: "buy_201108.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 995 } }
m30000| Thu Jun 14 01:33:20 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data8", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:20 [conn6] moveChunk updating self version to: 2|1||4fd9779f2cb587101f2e7a3f through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data8'
m30000| Thu Jun 14 01:33:20 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:20-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339652000549), what: "moveChunk.commit", ns: "buy_201108.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:20 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:20 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:20 [conn6] distributed lock 'buy_201108.data8/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:20 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:20-53", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339652000559), what: "moveChunk.from", ns: "buy_201108.data8", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:20 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data8", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data8-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:26175 w:69296 reslen:37 1023ms
m30999| Thu Jun 14 01:33:20 [conn] ChunkManager: time to load chunks for buy_201108.data8: 0ms sequenceNumber: 82 version: 2|1||4fd9779f2cb587101f2e7a3f based on: 1|2||4fd9779f2cb587101f2e7a3f
{ "millis" : 1024, "ok" : 1 }
m30999| Thu Jun 14 01:33:20 [conn] CMD: shardcollection: { shardcollection: "buy.data9", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:20 [conn] enable sharding on: buy.data9 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:20 [conn] going to create 1 chunk(s) for: buy.data9 using new epoch 4fd977a02cb587101f2e7a40
m30999| Thu Jun 14 01:33:20 [conn] ChunkManager: time to load chunks for buy.data9: 0ms sequenceNumber: 83 version: 1|0||4fd977a02cb587101f2e7a40 based on: (empty)
m30001| Thu Jun 14 01:33:20 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:20 [conn] splitting: buy.data9 shard: ns:buy.data9 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:20 [conn4] received splitChunk request: { splitChunk: "buy.data9", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy.data9-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:20 [conn4] created new distributed lock for buy.data9 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:20 [conn4] distributed lock 'buy.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977a05a33af45ea50d94e
m30001| Thu Jun 14 01:33:20 [conn4] splitChunk accepted at version 1|0||4fd977a02cb587101f2e7a40
m30001| Thu Jun 14 01:33:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:20-81", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652000566), what: "split", ns: "buy.data9", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977a02cb587101f2e7a40') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977a02cb587101f2e7a40') } } }
m30001| Thu Jun 14 01:33:20 [conn4] distributed lock 'buy.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:20 [conn] ChunkManager: time to load chunks for buy.data9: 0ms sequenceNumber: 84 version: 1|2||4fd977a02cb587101f2e7a40 based on: 1|0||4fd977a02cb587101f2e7a40
m30999| Thu Jun 14 01:33:20 [conn] CMD: movechunk: { movechunk: "buy.data9", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:20 [conn] moving chunk ns: buy.data9 moving ( ns:buy.data9 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:20 [conn4] received moveChunk request: { moveChunk: "buy.data9", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data9-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:20 [conn4] created new distributed lock for buy.data9 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:20 [conn4] distributed lock 'buy.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977a05a33af45ea50d94f
m30001| Thu Jun 14 01:33:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:20-82", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652000569), what: "moveChunk.start", ns: "buy.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:20 [conn4] moveChunk request accepted at version 1|2||4fd977a02cb587101f2e7a40
m30001| Thu Jun 14 01:33:20 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:20 [migrateThread] build index buy.data9 { _id: 1 }
m30000| Thu Jun 14 01:33:20 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:20 [migrateThread] info: creating collection buy.data9 on add index
m30000| Thu Jun 14 01:33:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data9' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:21 [conn4] moveChunk data transfer progress: { active: true, ns: "buy.data9", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:21 [conn4] moveChunk setting version to: 2|0||4fd977a02cb587101f2e7a40
m30000| Thu Jun 14 01:33:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy.data9' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:21 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:21-54", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652001577), what: "moveChunk.to", ns: "buy.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:33:21 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy.data9", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:21 [conn4] moveChunk updating self version to: 2|1||4fd977a02cb587101f2e7a40 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy.data9'
m30001| Thu Jun 14 01:33:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:21-83", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652001581), what: "moveChunk.commit", ns: "buy.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:21 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:21 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:21 [conn4] distributed lock 'buy.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:21-84", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652001590), what: "moveChunk.from", ns: "buy.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 8 } }
m30001| Thu Jun 14 01:33:21 [conn4] command admin.$cmd command: { moveChunk: "buy.data9", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy.data9-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:12606 w:147065 reslen:37 1022ms
m30999| Thu Jun 14 01:33:21 [conn] ChunkManager: time to load chunks for buy.data9: 0ms sequenceNumber: 85 version: 2|1||4fd977a02cb587101f2e7a40 based on: 1|2||4fd977a02cb587101f2e7a40
{ "millis" : 1023, "ok" : 1 }
m30999| Thu Jun 14 01:33:21 [conn] CMD: shardcollection: { shardcollection: "buy_201107.data9", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:21 [conn] enable sharding on: buy_201107.data9 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:21 [conn] going to create 1 chunk(s) for: buy_201107.data9 using new epoch 4fd977a12cb587101f2e7a41
m30999| Thu Jun 14 01:33:21 [conn] ChunkManager: time to load chunks for buy_201107.data9: 0ms sequenceNumber: 86 version: 1|0||4fd977a12cb587101f2e7a41 based on: (empty)
m30001| Thu Jun 14 01:33:21 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:21 [conn] splitting: buy_201107.data9 shard: ns:buy_201107.data9 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:33:21 [conn4] received splitChunk request: { splitChunk: "buy_201107.data9", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201107.data9-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:21 [conn4] created new distributed lock for buy_201107.data9 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:21 [conn4] distributed lock 'buy_201107.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977a15a33af45ea50d950
m30001| Thu Jun 14 01:33:21 [conn4] splitChunk accepted at version 1|0||4fd977a12cb587101f2e7a41
m30001| Thu Jun 14 01:33:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:21-85", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652001598), what: "split", ns: "buy_201107.data9", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977a12cb587101f2e7a41') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977a12cb587101f2e7a41') } } }
m30001| Thu Jun 14 01:33:21 [conn4] distributed lock 'buy_201107.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30999| Thu Jun 14 01:33:21 [conn] ChunkManager: time to load chunks for buy_201107.data9: 0ms sequenceNumber: 87 version: 1|2||4fd977a12cb587101f2e7a41 based on: 1|0||4fd977a12cb587101f2e7a41
m30999| Thu Jun 14 01:33:21 [conn] CMD: movechunk: { movechunk: "buy_201107.data9", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:21 [conn] moving chunk ns: buy_201107.data9 moving ( ns:buy_201107.data9 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:21 [conn4] received moveChunk request: { moveChunk: "buy_201107.data9", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data9-_id_1.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:21 [conn4] created new distributed lock for buy_201107.data9 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:21 [conn4] distributed lock 'buy_201107.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' acquired, ts : 4fd977a15a33af45ea50d951
m30001| Thu Jun 14 01:33:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:21-86", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652001601), what: "moveChunk.start", ns: "buy_201107.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:21 [conn4] moveChunk request accepted at version 1|2||4fd977a12cb587101f2e7a41
m30001| Thu Jun 14 01:33:21 [conn4] moveChunk number of documents: 300
m30000| Thu Jun 14 01:33:21 [migrateThread] build index buy_201107.data9 { _id: 1 }
m30000| Thu Jun 14 01:33:21 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:21 [migrateThread] info: creating collection buy_201107.data9 on add index
m30000| Thu Jun 14 01:33:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data9' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:22 [conn4] moveChunk data transfer progress: { active: true, ns: "buy_201107.data9", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:22 [conn4] moveChunk setting version to: 2|0||4fd977a12cb587101f2e7a41
m30000| Thu Jun 14 01:33:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201107.data9' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:22 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:22-55", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652002609), what: "moveChunk.to", ns: "buy_201107.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:33:22 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201107.data9", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:22 [conn4] moveChunk updating self version to: 2|1||4fd977a12cb587101f2e7a41 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201107.data9'
m30001| Thu Jun 14 01:33:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:22-87", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652002617), what: "moveChunk.commit", ns: "buy_201107.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:22 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:22 [conn4] moveChunk deleted: 300
m30001| Thu Jun 14 01:33:22 [conn4] distributed lock 'buy_201107.data9/domU-12-31-39-01-70-B4:30001:1339651971:1856192163' unlocked.
m30001| Thu Jun 14 01:33:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:22-88", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42676", time: new Date(1339652002627), what: "moveChunk.from", ns: "buy_201107.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 9 } }
m30001| Thu Jun 14 01:33:22 [conn4] command admin.$cmd command: { moveChunk: "buy_201107.data9", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201107.data9-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:13280 w:154686 reslen:37 1026ms
m30999| Thu Jun 14 01:33:22 [conn] ChunkManager: time to load chunks for buy_201107.data9: 0ms sequenceNumber: 88 version: 2|1||4fd977a12cb587101f2e7a41 based on: 1|2||4fd977a12cb587101f2e7a41
{ "millis" : 1027, "ok" : 1 }
m30999| Thu Jun 14 01:33:22 [conn] CMD: shardcollection: { shardcollection: "buy_201108.data9", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:22 [conn] enable sharding on: buy_201108.data9 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:22 [conn] going to create 1 chunk(s) for: buy_201108.data9 using new epoch 4fd977a22cb587101f2e7a42
m30999| Thu Jun 14 01:33:22 [conn] ChunkManager: time to load chunks for buy_201108.data9: 0ms sequenceNumber: 89 version: 1|0||4fd977a22cb587101f2e7a42 based on: (empty)
m30000| Thu Jun 14 01:33:22 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:22 [conn] splitting: buy_201108.data9 shard: ns:buy_201108.data9 at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:33:22 [conn6] received splitChunk request: { splitChunk: "buy_201108.data9", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "buy_201108.data9-_id_MinKey", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:22 [conn6] created new distributed lock for buy_201108.data9 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:22 [conn6] distributed lock 'buy_201108.data9/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd977a2488b0031418cbf10
m30000| Thu Jun 14 01:33:22 [conn6] splitChunk accepted at version 1|0||4fd977a22cb587101f2e7a42
m30000| Thu Jun 14 01:33:22 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:22-56", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339652002633), what: "split", ns: "buy_201108.data9", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977a22cb587101f2e7a42') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977a22cb587101f2e7a42') } } }
m30000| Thu Jun 14 01:33:22 [conn6] distributed lock 'buy_201108.data9/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30999| Thu Jun 14 01:33:22 [conn] ChunkManager: time to load chunks for buy_201108.data9: 0ms sequenceNumber: 90 version: 1|2||4fd977a22cb587101f2e7a42 based on: 1|0||4fd977a22cb587101f2e7a42
m30999| Thu Jun 14 01:33:22 [conn] CMD: movechunk: { movechunk: "buy_201108.data9", find: { _id: 1.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:22 [conn] moving chunk ns: buy_201108.data9 moving ( ns:buy_201108.data9 at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:22 [conn6] received moveChunk request: { moveChunk: "buy_201108.data9", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data9-_id_1.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:22 [conn6] created new distributed lock for buy_201108.data9 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:22 [conn6] distributed lock 'buy_201108.data9/domU-12-31-39-01-70-B4:30000:1339651974:685582305' acquired, ts : 4fd977a2488b0031418cbf11
m30000| Thu Jun 14 01:33:22 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:22-57", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339652002636), what: "moveChunk.start", ns: "buy_201108.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:22 [conn6] moveChunk request accepted at version 1|2||4fd977a22cb587101f2e7a42
m30000| Thu Jun 14 01:33:22 [conn6] moveChunk number of documents: 300
m30001| Thu Jun 14 01:33:22 [migrateThread] build index buy_201108.data9 { _id: 1 }
m30001| Thu Jun 14 01:33:22 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:22 [migrateThread] info: creating collection buy_201108.data9 on add index
m30001| Thu Jun 14 01:33:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data9' { _id: 1.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:33:23 [conn6] moveChunk data transfer progress: { active: true, ns: "buy_201108.data9", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:23 [conn6] moveChunk setting version to: 2|0||4fd977a22cb587101f2e7a42
m30001| Thu Jun 14 01:33:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'buy_201108.data9' { _id: 1.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:33:23 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-89", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652003649), what: "moveChunk.to", ns: "buy_201108.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 999 } }
m30000| Thu Jun 14 01:33:23 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "buy_201108.data9", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 300, clonedBytes: 5400, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:23 [conn6] moveChunk updating self version to: 2|1||4fd977a22cb587101f2e7a42 through { _id: MinKey } -> { _id: 1.0 } for collection 'buy_201108.data9'
m30000| Thu Jun 14 01:33:23 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-58", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339652003653), what: "moveChunk.commit", ns: "buy_201108.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:23 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:23 [conn6] moveChunk deleted: 300
m30000| Thu Jun 14 01:33:23 [conn6] distributed lock 'buy_201108.data9/domU-12-31-39-01-70-B4:30000:1339651974:685582305' unlocked.
m30000| Thu Jun 14 01:33:23 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-59", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51264", time: new Date(1339652003663), what: "moveChunk.from", ns: "buy_201108.data9", details: { min: { _id: 1.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 9 } }
m30000| Thu Jun 14 01:33:23 [conn6] command admin.$cmd command: { moveChunk: "buy_201108.data9", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "buy_201108.data9-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:28910 w:76739 reslen:37 1027ms
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy_201108.data9: 0ms sequenceNumber: 91 version: 2|1||4fd977a22cb587101f2e7a42 based on: 1|2||4fd977a22cb587101f2e7a42
{ "millis" : 1028, "ok" : 1 }
3: drop the non-suffixed db
m30999| Thu Jun 14 01:33:23 [conn] DROP DATABASE: buy
m30999| Thu Jun 14 01:33:23 [conn] erased database buy from registry
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data0: 0ms sequenceNumber: 92 version: 2|1||4fd977832cb587101f2e7a25 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data1: 0ms sequenceNumber: 93 version: 2|1||4fd977872cb587101f2e7a28 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data2: 0ms sequenceNumber: 94 version: 2|1||4fd9778a2cb587101f2e7a2b based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data3: 0ms sequenceNumber: 95 version: 2|1||4fd9778d2cb587101f2e7a2e based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data4: 0ms sequenceNumber: 96 version: 2|1||4fd977912cb587101f2e7a31 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data5: 0ms sequenceNumber: 97 version: 2|1||4fd977942cb587101f2e7a34 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data6: 0ms sequenceNumber: 98 version: 2|1||4fd977972cb587101f2e7a37 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data7: 0ms sequenceNumber: 99 version: 2|1||4fd9779a2cb587101f2e7a3a based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data8: 0ms sequenceNumber: 100 version: 2|1||4fd9779d2cb587101f2e7a3d based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data9: 0ms sequenceNumber: 101 version: 2|1||4fd977a02cb587101f2e7a40 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data0: 0ms sequenceNumber: 102 version: 2|1||4fd977832cb587101f2e7a25 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data1: 0ms sequenceNumber: 103 version: 2|1||4fd977872cb587101f2e7a28 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data2: 0ms sequenceNumber: 104 version: 2|1||4fd9778a2cb587101f2e7a2b based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data3: 0ms sequenceNumber: 105 version: 2|1||4fd9778d2cb587101f2e7a2e based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data4: 0ms sequenceNumber: 106 version: 2|1||4fd977912cb587101f2e7a31 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data5: 0ms sequenceNumber: 107 version: 2|1||4fd977942cb587101f2e7a34 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data6: 0ms sequenceNumber: 108 version: 2|1||4fd977972cb587101f2e7a37 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data7: 0ms sequenceNumber: 109 version: 2|1||4fd9779a2cb587101f2e7a3a based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data8: 0ms sequenceNumber: 110 version: 2|1||4fd9779d2cb587101f2e7a3d based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] ChunkManager: time to load chunks for buy.data9: 0ms sequenceNumber: 111 version: 2|1||4fd977a02cb587101f2e7a40 based on: (empty)
m30999| Thu Jun 14 01:33:23 [conn] DBConfig::dropDatabase: buy
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-0", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003671), what: "dropDatabase.start", ns: "buy", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-1", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003671), what: "dropCollection.start", ns: "buy.data0", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data0 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data0/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a43
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data0
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data0
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data0
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-2", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003674), what: "dropCollection", ns: "buy.data0", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data0/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-3", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003675), what: "dropCollection.start", ns: "buy.data1", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data1/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a44
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data1
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data1
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data1
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-4", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003677), what: "dropCollection", ns: "buy.data1", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data1/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-5", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003678), what: "dropCollection.start", ns: "buy.data2", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data2/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a45
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data2
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data2
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data2
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-6", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003680), what: "dropCollection", ns: "buy.data2", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data2/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-7", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003680), what: "dropCollection.start", ns: "buy.data3", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data3 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data3/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a46
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data3
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data3
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data3
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-8", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003683), what: "dropCollection", ns: "buy.data3", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data3/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-9", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003683), what: "dropCollection.start", ns: "buy.data4", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data4/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a47
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data4
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data4
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data4
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-10", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003685), what: "dropCollection", ns: "buy.data4", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data4/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-11", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003686), what: "dropCollection.start", ns: "buy.data5", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data5 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data5/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a48
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data5
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data5
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data5
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-12", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003688), what: "dropCollection", ns: "buy.data5", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data5/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-13", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003688), what: "dropCollection.start", ns: "buy.data6", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data6/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a49
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data6
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data6
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data6
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-14", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003691), what: "dropCollection", ns: "buy.data6", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data6/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-15", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003691), what: "dropCollection.start", ns: "buy.data7", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data7 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data7/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a4a
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data7
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data7
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data7
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-16", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003693), what: "dropCollection", ns: "buy.data7", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data7/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-17", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003694), what: "dropCollection.start", ns: "buy.data8", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data8 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data8/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a4b
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data8
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data8
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data8
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-18", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003696), what: "dropCollection", ns: "buy.data8", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data8/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-19", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003696), what: "dropCollection.start", ns: "buy.data9", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] created new distributed lock for buy.data9 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data9/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' acquired, ts : 4fd977a32cb587101f2e7a4c
m30000| Thu Jun 14 01:33:23 [conn6] CMD: drop buy.data9
m30001| Thu Jun 14 01:33:23 [conn4] CMD: drop buy.data9
m30001| Thu Jun 14 01:33:23 [conn4] wiping data for: buy.data9
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-20", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003698), what: "dropCollection", ns: "buy.data9", details: {} }
m30999| Thu Jun 14 01:33:23 [conn] distributed lock 'buy.data9/domU-12-31-39-01-70-B4:30999:1339651967:1804289383' unlocked.
m30999| Thu Jun 14 01:33:23 [conn] DBConfig::dropDatabase: buy dropped sharded collections: 10
m30999| Thu Jun 14 01:33:23 [conn] DBConfig::dropDatabase: buy dropped sharded collections: 0
m30001| Thu Jun 14 01:33:23 [initandlisten] connection accepted from 127.0.0.1:42685 #6 (6 connections now open)
m30001| Thu Jun 14 01:33:23 [conn6] dropDatabase buy
m30000| Thu Jun 14 01:33:23 [conn4] dropDatabase buy
m30001| Thu Jun 14 01:33:23 [conn6] dropDatabase buy
m30999| Thu Jun 14 01:33:23 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:23-21", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652003737), what: "dropDatabase", ns: "buy", details: {} }
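
The whole drop sequence above corresponds to a single shell call against mongos, which expands it into a distributed lock and drop per sharded collection, followed by dropDatabase on the shards and the config metadata cleanup. A sketch of that call, assuming a shell connected to the same mongos:

    // drops only the non-suffixed database; buy_201107 and buy_201108 are left alone
    var res = db.getSiblingDB("buy").dropDatabase();
    assert.eq(1, res.ok);
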
4: ensure only the non-suffixed db was dropped
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:23 [conn7] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:23 [conn3] no current chunk manager found for this shard, will initialize
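
The burst of "no current chunk manager found" lines is each shard (re)loading sharding metadata as the test reads the ten collections in the suffixed databases back to confirm they survived the drop. One way such a check could look in the shell (the assertions are illustrative, not the test's literal code):

    // the non-suffixed db should be gone, the suffixed ones should remain
    var names = db.adminCommand({ listDatabases: 1 }).databases
                  .map(function (d) { return d.name; });
    assert.eq(-1, names.indexOf("buy"));          // dropped above
    assert.neq(-1, names.indexOf("buy_201107"));  // untouched
    assert.neq(-1, names.indexOf("buy_201108"));  // untouched
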
m30999| Thu Jun 14 01:33:23 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:33:23 [conn3] end connection 127.0.0.1:51258 (13 connections now open)
m30000| Thu Jun 14 01:33:23 [conn4] end connection 127.0.0.1:51262 (12 connections now open)
m30000| Thu Jun 14 01:33:23 [conn6] end connection 127.0.0.1:51264 (11 connections now open)
m30001| Thu Jun 14 01:33:23 [conn3] end connection 127.0.0.1:42675 (5 connections now open)
m30000| Thu Jun 14 01:33:23 [conn7] end connection 127.0.0.1:51267 (10 connections now open)
m30001| Thu Jun 14 01:33:23 [conn4] end connection 127.0.0.1:42676 (4 connections now open)
m30000| Thu Jun 14 01:33:23 [conn12] end connection 127.0.0.1:51275 (9 connections now open)
m30000| Thu Jun 14 01:33:23 [conn14] end connection 127.0.0.1:51277 (8 connections now open)
m30001| Thu Jun 14 01:33:23 [conn6] end connection 127.0.0.1:42685 (3 connections now open)
Thu Jun 14 01:33:24 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:33:24 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:33:24 [interruptThread] now exiting
m30000| Thu Jun 14 01:33:24 dbexit:
m30000| Thu Jun 14 01:33:24 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:33:24 [interruptThread] closing listening socket: 19
m30000| Thu Jun 14 01:33:24 [interruptThread] closing listening socket: 20
m30000| Thu Jun 14 01:33:24 [interruptThread] closing listening socket: 21
m30000| Thu Jun 14 01:33:24 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:33:24 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:33:24 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:33:24 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:33:24 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:33:24 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:33:24 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:33:24 dbexit: really exiting now
m30001| Thu Jun 14 01:33:24 [conn5] end connection 127.0.0.1:42680 (2 connections now open)
Thu Jun 14 01:33:25 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:33:25 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:33:25 [interruptThread] now exiting
m30001| Thu Jun 14 01:33:25 dbexit:
m30001| Thu Jun 14 01:33:25 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:33:25 [interruptThread] closing listening socket: 22
m30001| Thu Jun 14 01:33:25 [interruptThread] closing listening socket: 23
m30001| Thu Jun 14 01:33:25 [interruptThread] closing listening socket: 24
m30001| Thu Jun 14 01:33:25 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:33:25 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:33:25 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:33:25 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:33:25 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:33:25 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:33:25 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:33:25 dbexit: really exiting now
Thu Jun 14 01:33:26 shell: stopped mongo program on port 30001
*** ShardingTest drop_sharded_db completed successfully in 40.099 seconds ***
40164.381981ms
Thu Jun 14 01:33:26 [initandlisten] connection accepted from 127.0.0.1:54782 #24 (11 connections now open)
*******************************************
Test : error1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/error1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/error1.js";TestData.testFile = "error1.js";TestData.testName = "error1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:33:26 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/error10'
Thu Jun 14 01:33:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/error10
m30000| Thu Jun 14 01:33:27
m30000| Thu Jun 14 01:33:27 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:33:27
m30000| Thu Jun 14 01:33:27 [initandlisten] MongoDB starting : pid=24355 port=30000 dbpath=/data/db/error10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:33:27 [initandlisten]
m30000| Thu Jun 14 01:33:27 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:33:27 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:33:27 [initandlisten]
m30000| Thu Jun 14 01:33:27 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:33:27 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:33:27 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:33:27 [initandlisten]
m30000| Thu Jun 14 01:33:27 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:33:27 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:33:27 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:33:27 [initandlisten] options: { dbpath: "/data/db/error10", port: 30000 }
m30000| Thu Jun 14 01:33:27 [websvr] admin web console waiting for connections on port 31000
m30000| Thu Jun 14 01:33:27 [initandlisten] waiting for connections on port 30000
Resetting db path '/data/db/error11'
m30000| Thu Jun 14 01:33:27 [initandlisten] connection accepted from 127.0.0.1:51281 #1 (1 connection now open)
Thu Jun 14 01:33:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/error11
m30001| Thu Jun 14 01:33:27
m30001| Thu Jun 14 01:33:27 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:33:27
m30001| Thu Jun 14 01:33:27 [initandlisten] MongoDB starting : pid=24368 port=30001 dbpath=/data/db/error11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:33:27 [initandlisten]
m30001| Thu Jun 14 01:33:27 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:33:27 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:33:27 [initandlisten]
m30001| Thu Jun 14 01:33:27 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:33:27 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:33:27 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:33:27 [initandlisten]
m30001| Thu Jun 14 01:33:27 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:33:27 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:33:27 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:33:27 [initandlisten] options: { dbpath: "/data/db/error11", port: 30001 }
m30001| Thu Jun 14 01:33:27 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:33:27 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30000| Thu Jun 14 01:33:27 [initandlisten] connection accepted from 127.0.0.1:51284 #2 (2 connections now open)
ShardingTest error1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
m30000| Thu Jun 14 01:33:27 [FileAllocator] allocating new datafile /data/db/error10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:33:27 [FileAllocator] creating directory /data/db/error10/_tmp
Thu Jun 14 01:33:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30001| Thu Jun 14 01:33:27 [initandlisten] connection accepted from 127.0.0.1:42690 #1 (1 connection now open)
m30999| Thu Jun 14 01:33:27 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:33:27 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24382 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:33:27 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:33:27 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:33:27 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:33:27 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:33:27 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:27 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:27 [initandlisten] connection accepted from 127.0.0.1:51286 #3 (3 connections now open)
m30999| Thu Jun 14 01:33:27 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:27 [FileAllocator] done allocating datafile /data/db/error10/config.ns, size: 16MB, took 0.255 secs
m30000| Thu Jun 14 01:33:27 [FileAllocator] allocating new datafile /data/db/error10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:33:27 [FileAllocator] done allocating datafile /data/db/error10/config.0, size: 16MB, took 0.269 secs
m30000| Thu Jun 14 01:33:27 [FileAllocator] allocating new datafile /data/db/error10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:33:27 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:33:27 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn2] insert config.settings keyUpdates:0 locks(micros) w:545980 545ms
m30999| Thu Jun 14 01:33:27 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:27 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:33:27 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:27 [initandlisten] connection accepted from 127.0.0.1:51289 #4 (4 connections now open)
m30999| Thu Jun 14 01:33:27 [mongosMain] connected connection!
m30999| Thu Jun 14 01:33:27 [mongosMain] MaxChunkSize: 50
m30000| Thu Jun 14 01:33:27 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:33:27 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:33:27 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:33:27 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:27 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:27 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:33:27 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:27 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:33:27 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:33:27 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:33:27 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:33:27 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:33:27
m30999| Thu Jun 14 01:33:27 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:27 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:33:27 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:27 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:27 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:33:27 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:33:27 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:27 [initandlisten] connection accepted from 127.0.0.1:51290 #5 (5 connections now open)
m30999| Thu Jun 14 01:33:27 [Balancer] connected connection!
m30999| Thu Jun 14 01:33:27 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:33:27 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652007:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:33:27 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:27 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:33:27 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:33:27 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:33:27 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652007:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:33:27 [Balancer] inserting initial doc in config.locks for lock balancer
m30000| Thu Jun 14 01:33:27 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:33:27 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:27 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652007:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652007:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652007:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:33:27 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977a7f278c2535db761d4" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:33:28 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652007:1804289383' acquired, ts : 4fd977a7f278c2535db761d4
m30999| Thu Jun 14 01:33:28 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:33:28 [Balancer] no collections to balance
m30999| Thu Jun 14 01:33:28 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:33:28 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:33:28 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652007:1804289383' unlocked.
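The balancer lock acquired and released above is an ordinary document in the config server's config.locks collection; a minimal, hypothetical shell snippet (assuming a connection to the mongos this run started on port 30999) to inspect it:

    // hypothetical sketch: look at the balancer's distributed lock document
    // field names (state, who, process, when, why, ts) are the ones printed in the log above
    var conn = new Mongo("localhost:30999");                 // assumed mongos address for this run
    var configDB = conn.getDB("config");
    printjson(configDB.locks.findOne({ _id: "balancer" }));  // state 0 = unlocked, as after the round above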
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:33:28 [mongosMain] connection accepted from 127.0.0.1:43270 #1 (1 connection now open)
m30999| Thu Jun 14 01:33:28 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:33:28 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:33:28 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:28 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:33:28 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:33:28 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:28 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:28 [conn] connected connection!
m30001| Thu Jun 14 01:33:28 [initandlisten] connection accepted from 127.0.0.1:42699 #2 (2 connections now open)
m30999| Thu Jun 14 01:33:28 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:33:28 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:33:28 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:33:28 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:33:28 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:33:28 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:28 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:28 [initandlisten] connection accepted from 127.0.0.1:51293 #6 (6 connections now open)
m30999| Thu Jun 14 01:33:28 [conn] connected connection!
m30999| Thu Jun 14 01:33:28 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977a7f278c2535db761d3
m30999| Thu Jun 14 01:33:28 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:33:28 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:33:28 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:28 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:28 [conn] connected connection!
m30999| Thu Jun 14 01:33:28 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977a7f278c2535db761d3
m30001| Thu Jun 14 01:33:28 [initandlisten] connection accepted from 127.0.0.1:42701 #3 (3 connections now open)
m30999| Thu Jun 14 01:33:28 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:33:28 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:33:28 [FileAllocator] allocating new datafile /data/db/error11/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:33:28 [FileAllocator] creating directory /data/db/error11/_tmp
m30000| Thu Jun 14 01:33:28 [FileAllocator] done allocating datafile /data/db/error10/config.1, size: 32MB, took 0.534 secs
m30001| Thu Jun 14 01:33:28 [FileAllocator] done allocating datafile /data/db/error11/test.ns, size: 16MB, took 0.309 secs
m30001| Thu Jun 14 01:33:28 [FileAllocator] allocating new datafile /data/db/error11/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:29 [FileAllocator] done allocating datafile /data/db/error11/test.0, size: 16MB, took 0.307 secs
m30001| Thu Jun 14 01:33:29 [conn3] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:33:29 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:29 [conn3] insert test.foo keyUpdates:0 locks(micros) W:115 w:1047044 1046ms
m30001| Thu Jun 14 01:33:29 [FileAllocator] allocating new datafile /data/db/error11/test.1, filling with zeroes...
m30999| Thu Jun 14 01:33:29 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:29 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:29 [conn] connected connection!
m30001| Thu Jun 14 01:33:29 [initandlisten] connection accepted from 127.0.0.1:42702 #4 (4 connections now open)
m30999| Thu Jun 14 01:33:29 [conn] CMD: shardcollection: { shardcollection: "test.foo2", key: { num: 1.0 } }
m30999| Thu Jun 14 01:33:29 [conn] enable sharding on: test.foo2 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:33:29 [conn] going to create 1 chunk(s) for: test.foo2 using new epoch 4fd977a9f278c2535db761d5
m30999| Thu Jun 14 01:33:29 [conn] ChunkManager: time to load chunks for test.foo2: 0ms sequenceNumber: 2 version: 1|0||4fd977a9f278c2535db761d5 based on: (empty)
m30999| Thu Jun 14 01:33:29 [conn] resetting shard version of test.foo2 on localhost:30000, version is zero
m30999| Thu Jun 14 01:33:29 [conn] setShardVersion shard0000 localhost:30000 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977a7f278c2535db761d3'), shard: "shard0000", shardHost: "localhost:30000" } 0x9de0f28
m30001| Thu Jun 14 01:33:29 [conn4] build index test.foo2 { _id: 1 }
m30001| Thu Jun 14 01:33:29 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:29 [conn4] info: creating collection test.foo2 on add index
m30001| Thu Jun 14 01:33:29 [conn4] build index test.foo2 { num: 1.0 }
m30001| Thu Jun 14 01:33:29 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:29 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:33:29 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:29 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:29 [conn] setShardVersion shard0001 localhost:30001 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977a9f278c2535db761d5'), serverID: ObjectId('4fd977a7f278c2535db761d3'), shard: "shard0001", shardHost: "localhost:30001" } 0x9de3178
m30999| Thu Jun 14 01:33:29 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo2", need_authoritative: true, errmsg: "first time for collection 'test.foo2'", ok: 0.0 }
m30999| Thu Jun 14 01:33:29 [conn] setShardVersion shard0001 localhost:30001 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977a9f278c2535db761d5'), serverID: ObjectId('4fd977a7f278c2535db761d3'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9de3178
m30001| Thu Jun 14 01:33:29 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:29 [initandlisten] connection accepted from 127.0.0.1:51296 #7 (7 connections now open)
m30999| Thu Jun 14 01:33:29 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:29 [conn] about to initiate autosplit: ns:test.foo2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 7396500 splitThreshold: 921
m30999| Thu Jun 14 01:33:29 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:33:29 [conn] about to initiate autosplit: ns:test.foo2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 201 splitThreshold: 921
m30999| Thu Jun 14 01:33:29 [conn] chunk not full enough to trigger auto-split no split entry
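The "enabling sharding on: test" and "CMD: shardcollection" lines above correspond to the enablesharding and shardcollection admin commands; a minimal sketch of those calls (assuming a shell connected to the mongos; not the actual error1.js source):

    // sketch of the commands behind the sharding setup logged above
    var admin = db.getSiblingDB("admin");
    admin.runCommand({ enablesharding: "test" });
    admin.runCommand({ shardcollection: "test.foo2", key: { num: 1 } });  // key matches the { num: 1.0 } shown in the log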
m30999| Thu Jun 14 01:33:29 [conn] splitting: test.foo2 shard: ns:test.foo2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey }
m30000| Thu Jun 14 01:33:29 [initandlisten] connection accepted from 127.0.0.1:51297 #8 (8 connections now open)
m30001| Thu Jun 14 01:33:29 [conn4] received splitChunk request: { splitChunk: "test.foo2", keyPattern: { num: 1.0 }, min: { num: MinKey }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 10.0 } ], shardId: "test.foo2-num_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:29 [conn4] created new distributed lock for test.foo2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:29 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652009:1823513092 (sleeping for 30000ms)
m30001| Thu Jun 14 01:33:29 [conn4] distributed lock 'test.foo2/domU-12-31-39-01-70-B4:30001:1339652009:1823513092' acquired, ts : 4fd977a92279cddea9cf5d23
m30001| Thu Jun 14 01:33:29 [conn4] splitChunk accepted at version 1|0||4fd977a9f278c2535db761d5
m30001| Thu Jun 14 01:33:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:29-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42702", time: new Date(1339652009150), what: "split", ns: "test.foo2", details: { before: { min: { num: MinKey }, max: { num: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: MinKey }, max: { num: 10.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977a9f278c2535db761d5') }, right: { min: { num: 10.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977a9f278c2535db761d5') } } }
m30001| Thu Jun 14 01:33:29 [conn4] distributed lock 'test.foo2/domU-12-31-39-01-70-B4:30001:1339652009:1823513092' unlocked.
m30999| Thu Jun 14 01:33:29 [conn] ChunkManager: time to load chunks for test.foo2: 0ms sequenceNumber: 3 version: 1|2||4fd977a9f278c2535db761d5 based on: 1|0||4fd977a9f278c2535db761d5
m30999| Thu Jun 14 01:33:29 [conn] CMD: movechunk: { movechunk: "test.foo2", find: { num: 20.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:29 [conn] moving chunk ns: test.foo2 moving ( ns:test.foo2 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 10.0 } max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:29 [conn4] received moveChunk request: { moveChunk: "test.foo2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 10.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo2-num_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:29 [conn4] created new distributed lock for test.foo2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:29 [conn4] distributed lock 'test.foo2/domU-12-31-39-01-70-B4:30001:1339652009:1823513092' acquired, ts : 4fd977a92279cddea9cf5d24
m30001| Thu Jun 14 01:33:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:29-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42702", time: new Date(1339652009154), what: "moveChunk.start", ns: "test.foo2", details: { min: { num: 10.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:29 [conn4] moveChunk request accepted at version 1|2||4fd977a9f278c2535db761d5
m30001| Thu Jun 14 01:33:29 [conn4] moveChunk number of documents: 3
m30001| Thu Jun 14 01:33:29 [initandlisten] connection accepted from 127.0.0.1:42705 #5 (5 connections now open)
m30000| Thu Jun 14 01:33:29 [FileAllocator] allocating new datafile /data/db/error10/test.ns, filling with zeroes...
m30000| Thu Jun 14 01:33:30 [FileAllocator] done allocating datafile /data/db/error10/test.ns, size: 16MB, took 0.856 secs
m30000| Thu Jun 14 01:33:30 [FileAllocator] allocating new datafile /data/db/error10/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:30 [FileAllocator] done allocating datafile /data/db/error11/test.1, size: 32MB, took 0.893 secs
m30001| Thu Jun 14 01:33:30 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo2", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:30 [FileAllocator] done allocating datafile /data/db/error10/test.0, size: 16MB, took 0.251 secs
m30000| Thu Jun 14 01:33:30 [migrateThread] build index test.foo2 { _id: 1 }
m30000| Thu Jun 14 01:33:30 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:30 [migrateThread] info: creating collection test.foo2 on add index
m30000| Thu Jun 14 01:33:30 [migrateThread] build index test.foo2 { num: 1.0 }
m30000| Thu Jun 14 01:33:30 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:30 [FileAllocator] allocating new datafile /data/db/error10/test.1, filling with zeroes...
m30000| Thu Jun 14 01:33:30 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo2' { num: 10.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:33:30 [FileAllocator] done allocating datafile /data/db/error10/test.1, size: 32MB, took 0.599 secs
m30001| Thu Jun 14 01:33:31 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo2", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 93, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:31 [conn4] moveChunk setting version to: 2|0||4fd977a9f278c2535db761d5
m30000| Thu Jun 14 01:33:31 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo2' { num: 10.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:33:31 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:31-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652011165), what: "moveChunk.to", ns: "test.foo2", details: { min: { num: 10.0 }, max: { num: MaxKey }, step1 of 5: 1129, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 862 } }
m30000| Thu Jun 14 01:33:31 [initandlisten] connection accepted from 127.0.0.1:51299 #9 (9 connections now open)
m30001| Thu Jun 14 01:33:31 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo2", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 93, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:31 [conn4] moveChunk updating self version to: 2|1||4fd977a9f278c2535db761d5 through { num: MinKey } -> { num: 10.0 } for collection 'test.foo2'
m30001| Thu Jun 14 01:33:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:31-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42702", time: new Date(1339652011170), what: "moveChunk.commit", ns: "test.foo2", details: { min: { num: 10.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:31 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:31 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:33:31 [conn4] distributed lock 'test.foo2/domU-12-31-39-01-70-B4:30001:1339652009:1823513092' unlocked.
m30001| Thu Jun 14 01:33:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:31-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42702", time: new Date(1339652011171), what: "moveChunk.from", ns: "test.foo2", details: { min: { num: 10.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:33:31 [conn4] command admin.$cmd command: { moveChunk: "test.foo2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 10.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo2-num_10.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:531 w:1345 reslen:37 2018ms
m30999| Thu Jun 14 01:33:31 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:33:31 [conn] ChunkManager: time to load chunks for test.foo2: 0ms sequenceNumber: 4 version: 2|1||4fd977a9f278c2535db761d5 based on: 1|2||4fd977a9f278c2535db761d5
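The splitChunk/moveChunk sequence above (split at { num: 10 }, then the { num: 10 } -> MaxKey chunk migrated from shard0001 to shard0000) is driven from the mongos; a hedged sketch of the user-level commands, where the movechunk document is copied from the log and the form of the split request (middle vs. find) is an assumption:

    // sketch only: user-level commands that would produce the split and migration above
    var admin = db.getSiblingDB("admin");
    admin.runCommand({ split: "test.foo2", middle: { num: 10 } });                           // split form assumed
    admin.runCommand({ movechunk: "test.foo2", find: { num: 20 }, to: "localhost:30000" });  // as logged by mongos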
a: 3
b: 1
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion shard0000 localhost:30000 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977a9f278c2535db761d5'), serverID: ObjectId('4fd977a7f278c2535db761d3'), shard: "shard0000", shardHost: "localhost:30000" } 0x9de0f28
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo2", need_authoritative: true, errmsg: "first time for collection 'test.foo2'", ok: 0.0 }
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion shard0000 localhost:30000 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977a9f278c2535db761d5'), serverID: ObjectId('4fd977a7f278c2535db761d3'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9de0f28
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion shard0001 localhost:30001 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977a9f278c2535db761d5'), serverID: ObjectId('4fd977a7f278c2535db761d3'), shard: "shard0001", shardHost: "localhost:30001" } 0x9de3178
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977a9f278c2535db761d5'), ok: 1.0 }
m30999| Thu Jun 14 01:33:31 [conn] about to initiate autosplit: ns:test.foo2 at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 10.0 } max: { num: MaxKey } dataWritten: 8083709 splitThreshold: 471859
m30000| Thu Jun 14 01:33:31 [conn6] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:31 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:33:31 [conn] CMD: shardcollection: { shardcollection: "test.foo3", key: { num: 1.0 } }
m30999| Thu Jun 14 01:33:31 [conn] enable sharding on: test.foo3 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:33:31 [conn] going to create 1 chunk(s) for: test.foo3 using new epoch 4fd977abf278c2535db761d6
m30001| Thu Jun 14 01:33:31 [conn4] build index test.foo3 { _id: 1 }
m30001| Thu Jun 14 01:33:31 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:31 [conn4] info: creating collection test.foo3 on add index
m30001| Thu Jun 14 01:33:31 [conn4] build index test.foo3 { num: 1.0 }
m30001| Thu Jun 14 01:33:31 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:31 [conn] ChunkManager: time to load chunks for test.foo3: 0ms sequenceNumber: 5 version: 1|0||4fd977abf278c2535db761d6 based on: (empty)
m30999| Thu Jun 14 01:33:31 [conn] resetting shard version of test.foo3 on localhost:30000, version is zero
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion shard0000 localhost:30000 test.foo3 { setShardVersion: "test.foo3", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977a7f278c2535db761d3'), shard: "shard0000", shardHost: "localhost:30000" } 0x9de0f28
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion shard0001 localhost:30001 test.foo3 { setShardVersion: "test.foo3", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977abf278c2535db761d6'), serverID: ObjectId('4fd977a7f278c2535db761d3'), shard: "shard0001", shardHost: "localhost:30001" } 0x9de3178
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo3", need_authoritative: true, errmsg: "first time for collection 'test.foo3'", ok: 0.0 }
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion shard0001 localhost:30001 test.foo3 { setShardVersion: "test.foo3", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977abf278c2535db761d6'), serverID: ObjectId('4fd977a7f278c2535db761d3'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9de3178
m30001| Thu Jun 14 01:33:31 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:31 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:31 [conn] warning: shard key mismatch for insert { _id: ObjectId('4fd977ab1e1234aa796a98c0') }, expected values for { num: 1.0 }, reloading config data to ensure not stale
m30999| Thu Jun 14 01:33:32 [conn] tried to insert object with no valid shard key for { num: 1.0 } : { _id: ObjectId('4fd977ab1e1234aa796a98c0') }
m30999| Thu Jun 14 01:33:32 [conn] User Assertion: 8011:tried to insert object with no valid shard key for { num: 1.0 } : { _id: ObjectId('4fd977ab1e1234aa796a98c0') }
m30999| Thu Jun 14 01:33:32 [conn] AssertionException while processing op type : 2002 to : test.foo3 :: caused by :: 8011 tried to insert object with no valid shard key for { num: 1.0 } : { _id: ObjectId('4fd977ab1e1234aa796a98c0') }
m30999| Thu Jun 14 01:33:32 [conn] about to initiate autosplit: ns:test.foo3 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 174598 splitThreshold: 921
m30999| Thu Jun 14 01:33:32 [conn] chunk not full enough to trigger auto-split no split entry
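Error 8011 above ("tried to insert object with no valid shard key") is raised when a document sent to the sharded collection test.foo3 lacks its shard key field num; a hypothetical reproduction follows (the actual error1.js contents are not shown in this log):

    // hypothetical insert that triggers the 8011 assertion logged above
    var testDB = db.getSiblingDB("test");     // assumes a shell connected to the mongos
    testDB.foo3.insert({ x: 1 });             // no "num" field, so mongos cannot route it
    print(testDB.getLastError());             // in this pre-write-command era the error is presumably surfaced via getLastError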
m30999| Thu Jun 14 01:33:32 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:33:32 [conn3] end connection 127.0.0.1:51286 (8 connections now open)
m30000| Thu Jun 14 01:33:32 [conn5] end connection 127.0.0.1:51290 (7 connections now open)
m30000| Thu Jun 14 01:33:32 [conn6] end connection 127.0.0.1:51293 (6 connections now open)
m30001| Thu Jun 14 01:33:32 [conn4] end connection 127.0.0.1:42702 (4 connections now open)
m30001| Thu Jun 14 01:33:32 [conn3] end connection 127.0.0.1:42701 (3 connections now open)
Thu Jun 14 01:33:33 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:33:33 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:33:33 [interruptThread] now exiting
m30000| Thu Jun 14 01:33:33 dbexit:
m30000| Thu Jun 14 01:33:33 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:33:33 [interruptThread] closing listening socket: 20
m30000| Thu Jun 14 01:33:33 [interruptThread] closing listening socket: 21
m30000| Thu Jun 14 01:33:33 [interruptThread] closing listening socket: 22
m30000| Thu Jun 14 01:33:33 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:33:33 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:33:33 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:33:33 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:33:33 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:33:33 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:33:33 [conn5] end connection 127.0.0.1:42705 (2 connections now open)
m30000| Thu Jun 14 01:33:33 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:33:33 [conn9] end connection 127.0.0.1:51299 (5 connections now open)
m30000| Thu Jun 14 01:33:33 dbexit: really exiting now
Thu Jun 14 01:33:34 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:33:34 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:33:34 [interruptThread] now exiting
m30001| Thu Jun 14 01:33:34 dbexit:
m30001| Thu Jun 14 01:33:34 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:33:34 [interruptThread] closing listening socket: 23
m30001| Thu Jun 14 01:33:34 [interruptThread] closing listening socket: 24
m30001| Thu Jun 14 01:33:34 [interruptThread] closing listening socket: 25
m30001| Thu Jun 14 01:33:34 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:33:34 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:33:34 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:33:34 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:33:34 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:33:34 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:33:34 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:33:34 dbexit: really exiting now
Thu Jun 14 01:33:35 shell: stopped mongo program on port 30001
*** ShardingTest error1 completed successfully in 8.235 seconds ***
8289.546013ms
Thu Jun 14 01:33:35 [initandlisten] connection accepted from 127.0.0.1:54803 #25 (12 connections now open)
*******************************************
Test : features1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/features1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/features1.js";TestData.testFile = "features1.js";TestData.testName = "features1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:33:35 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/features10'
Thu Jun 14 01:33:35 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/features10
m30000| Thu Jun 14 01:33:35
m30000| Thu Jun 14 01:33:35 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:33:35
m30000| Thu Jun 14 01:33:35 [initandlisten] MongoDB starting : pid=24426 port=30000 dbpath=/data/db/features10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:33:35 [initandlisten]
m30000| Thu Jun 14 01:33:35 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:33:35 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:33:35 [initandlisten]
m30000| Thu Jun 14 01:33:35 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:33:35 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:33:35 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:33:35 [initandlisten]
m30000| Thu Jun 14 01:33:35 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:33:35 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:33:35 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:33:35 [initandlisten] options: { dbpath: "/data/db/features10", port: 30000 }
m30000| Thu Jun 14 01:33:35 [websvr] admin web console waiting for connections on port 31000
m30000| Thu Jun 14 01:33:35 [initandlisten] waiting for connections on port 30000
Resetting db path '/data/db/features11'
m30000| Thu Jun 14 01:33:35 [initandlisten] connection accepted from 127.0.0.1:51302 #1 (1 connection now open)
Thu Jun 14 01:33:35 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/features11
m30001| Thu Jun 14 01:33:35
m30001| Thu Jun 14 01:33:35 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:33:35
m30001| Thu Jun 14 01:33:35 [initandlisten] MongoDB starting : pid=24439 port=30001 dbpath=/data/db/features11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:33:35 [initandlisten]
m30001| Thu Jun 14 01:33:35 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:33:35 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:33:35 [initandlisten]
m30001| Thu Jun 14 01:33:35 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:33:35 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:33:35 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:33:35 [initandlisten]
m30001| Thu Jun 14 01:33:35 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:33:35 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:33:35 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:33:35 [initandlisten] options: { dbpath: "/data/db/features11", port: 30001 }
m30001| Thu Jun 14 01:33:35 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:33:35 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
ShardingTest features1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
m30000| Thu Jun 14 01:33:35 [initandlisten] connection accepted from 127.0.0.1:51305 #2 (2 connections now open)
m30000| Thu Jun 14 01:33:35 [FileAllocator] allocating new datafile /data/db/features10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:33:35 [FileAllocator] creating directory /data/db/features10/_tmp
Thu Jun 14 01:33:35 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30001| Thu Jun 14 01:33:35 [initandlisten] connection accepted from 127.0.0.1:42711 #1 (1 connection now open)
m30999| Thu Jun 14 01:33:35 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:33:35 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24453 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:33:35 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:33:35 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:33:35 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:33:35 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:33:35 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:35 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:35 [initandlisten] connection accepted from 127.0.0.1:51307 #3 (3 connections now open)
m30999| Thu Jun 14 01:33:35 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:35 [FileAllocator] done allocating datafile /data/db/features10/config.ns, size: 16MB, took 0.233 secs
m30000| Thu Jun 14 01:33:35 [FileAllocator] allocating new datafile /data/db/features10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:33:36 [FileAllocator] done allocating datafile /data/db/features10/config.0, size: 16MB, took 0.252 secs
m30000| Thu Jun 14 01:33:36 [FileAllocator] allocating new datafile /data/db/features10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:33:36 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:36 [conn2] insert config.settings keyUpdates:0 locks(micros) w:507099 507ms
m30999| Thu Jun 14 01:33:36 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:36 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:36 [initandlisten] connection accepted from 127.0.0.1:51310 #4 (4 connections now open)
m30000| Thu Jun 14 01:33:36 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:33:36 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:36 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:36 [initandlisten] connection accepted from 127.0.0.1:51311 #5 (5 connections now open)
m30999| Thu Jun 14 01:33:36 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:33:36 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:36 [mongosMain] waiting for connections on port 30999
m30000| Thu Jun 14 01:33:36 [conn5] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:36 [conn5] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:33:36 [conn5] build index config.chunks { ns: 1, min: 1 }
m30999| Thu Jun 14 01:33:36 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:36 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:33:36 [Balancer] about to contact config servers and shards
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: PeriodicTask::Runner
m30000| Thu Jun 14 01:33:36 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:36 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:36 [conn5] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:36 [conn5] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:33:36 [conn5] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:33:36 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:33:36
m30999| Thu Jun 14 01:33:36 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:36 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:36 [initandlisten] connection accepted from 127.0.0.1:51312 #6 (6 connections now open)
m30000| Thu Jun 14 01:33:36 [conn4] build index config.mongos { _id: 1 }
m30999| Thu Jun 14 01:33:36 [Balancer] connected connection!
m30000| Thu Jun 14 01:33:36 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:33:36 [Balancer] inserting initial doc in config.locks for lock balancer
m30000| Thu Jun 14 01:33:36 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652016:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652016:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652016:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:33:36 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977b00a24f327487567a1" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:33:36 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652016:1804289383' acquired, ts : 4fd977b00a24f327487567a1
m30999| Thu Jun 14 01:33:36 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:33:36 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652016:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:33:36 [Balancer] no collections to balance
m30999| Thu Jun 14 01:33:36 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:33:36 [Balancer] *** end of balancing round
m30000| Thu Jun 14 01:33:36 [conn5] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652016:1804289383' unlocked.
m30999| Thu Jun 14 01:33:36 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:33:36 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652016:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:33:36 [conn5] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:33:36 [mongosMain] connection accepted from 127.0.0.1:43292 #1 (1 connection now open)
m30999| Thu Jun 14 01:33:36 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:33:36 [conn5] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:33:36 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:33:36 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:36 [conn] connected connection!
m30001| Thu Jun 14 01:33:36 [initandlisten] connection accepted from 127.0.0.1:42721 #2 (2 connections now open)
m30999| Thu Jun 14 01:33:36 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:33:36 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:36 [initandlisten] connection accepted from 127.0.0.1:51315 #7 (7 connections now open)
m30999| Thu Jun 14 01:33:36 [conn] connected connection!
m30999| Thu Jun 14 01:33:36 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977b00a24f327487567a0
m30999| Thu Jun 14 01:33:36 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:33:36 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:36 [conn] connected connection!
m30001| Thu Jun 14 01:33:36 [initandlisten] connection accepted from 127.0.0.1:42723 #3 (3 connections now open)
m30999| Thu Jun 14 01:33:36 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977b00a24f327487567a0
m30999| Thu Jun 14 01:33:36 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: WriteBackListener-localhost:30001
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Thu Jun 14 01:33:36 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:33:36 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:36 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:36 [conn] connected connection!
m30001| Thu Jun 14 01:33:36 [initandlisten] connection accepted from 127.0.0.1:42724 #4 (4 connections now open)
m30999| Thu Jun 14 01:33:36 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:33:36 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:33:36 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:33:36 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { num: 1.0 } }
m30999| Thu Jun 14 01:33:36 [conn] enable sharding on: test.foo with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:33:36 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd977b00a24f327487567a2
m30001| Thu Jun 14 01:33:36 [FileAllocator] allocating new datafile /data/db/features11/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:33:36 [FileAllocator] creating directory /data/db/features11/_tmp
m30999| Thu Jun 14 01:33:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd977b00a24f327487567a2 based on: (empty)
m30000| Thu Jun 14 01:33:36 [conn5] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:33:36 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:36 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:33:36 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:36 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b00a24f327487567a2'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30000| Thu Jun 14 01:33:36 [FileAllocator] done allocating datafile /data/db/features10/config.1, size: 32MB, took 0.588 secs
m30001| Thu Jun 14 01:33:37 [FileAllocator] done allocating datafile /data/db/features11/test.ns, size: 16MB, took 0.317 secs
m30001| Thu Jun 14 01:33:37 [FileAllocator] allocating new datafile /data/db/features11/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:37 [FileAllocator] done allocating datafile /data/db/features11/test.0, size: 16MB, took 0.28 secs
m30001| Thu Jun 14 01:33:37 [FileAllocator] allocating new datafile /data/db/features11/test.1, filling with zeroes...
m30001| Thu Jun 14 01:33:37 [conn4] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:33:37 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:37 [conn4] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:33:37 [conn4] build index test.foo { num: 1.0 }
m30001| Thu Jun 14 01:33:37 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:37 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) W:120 r:258 w:1077873 1077ms
m30001| Thu Jun 14 01:33:37 [conn3] command admin.$cmd command: { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b00a24f327487567a2'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 reslen:173 1075ms
m30999| Thu Jun 14 01:33:37 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:33:37 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b00a24f327487567a2'), serverID: ObjectId('4fd977b00a24f327487567a0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:37 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:33:37 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:37 [initandlisten] connection accepted from 127.0.0.1:51318 #8 (8 connections now open)
m30999| Thu Jun 14 01:33:37 [conn] sharded index write for test.system.indexes
m30001| Thu Jun 14 01:33:37 [conn3] build index test.foo { y: 1.0 }
m30001| Thu Jun 14 01:33:37 [conn3] build index done. scanned 0 total records. 0 secs
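The "sharded index write for test.system.indexes" line and the { y: 1.0 } index build above correspond to an index creation routed through the mongos; a minimal sketch (assuming a shell connected to the mongos):

    // sketch of the index creation logged above
    var testDB = db.getSiblingDB("test");
    testDB.foo.ensureIndex({ y: 1 });   // mongos forwards the system.indexes write to the shards holding test.foo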
m30999| Thu Jun 14 01:33:37 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey }
m30000| Thu Jun 14 01:33:37 [initandlisten] connection accepted from 127.0.0.1:51319 #9 (9 connections now open)
m30001| Thu Jun 14 01:33:37 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: MinKey }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 10.0 } ], shardId: "test.foo-num_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:37 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652017:512547722 (sleeping for 30000ms)
m30001| Thu Jun 14 01:33:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652017:512547722' acquired, ts : 4fd977b145f5df931002d09e
m30001| Thu Jun 14 01:33:37 [conn4] splitChunk accepted at version 1|0||4fd977b00a24f327487567a2
m30001| Thu Jun 14 01:33:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:37-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652017436), what: "split", ns: "test.foo", details: { before: { min: { num: MinKey }, max: { num: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: MinKey }, max: { num: 10.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977b00a24f327487567a2') }, right: { min: { num: 10.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977b00a24f327487567a2') } } }
m30001| Thu Jun 14 01:33:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652017:512547722' unlocked.
m30999| Thu Jun 14 01:33:37 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd977b00a24f327487567a2 based on: 1|0||4fd977b00a24f327487567a2
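The split logged above divides the single chunk of test.foo at { num: 10 }, producing the 1|1 and 1|2 chunks that mongos then reloads. A minimal sketch of the kind of command that triggers such a split, assuming a mongo shell connected to the mongos on port 30999 and that test.foo is already sharded on { num: 1 }; the exact call made by the test script is not shown in the log:

    // Ask mongos to split the chunk of test.foo exactly at { num: 10 }.
    // mongos forwards this as the splitChunk request logged by shard0001 above.
    db.adminCommand({ split: "test.foo", middle: { num: 10 } });

    // Equivalent shell helper:
    // sh.splitAt("test.foo", { num: 10 });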
m30999| Thu Jun 14 01:33:37 [conn] CMD: movechunk: { movechunk: "test.foo", find: { num: 20.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:37 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 10.0 } max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:37 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 10.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652017:512547722' acquired, ts : 4fd977b145f5df931002d09f
m30001| Thu Jun 14 01:33:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:37-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652017440), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 10.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:37 [conn4] moveChunk request accepted at version 1|2||4fd977b00a24f327487567a2
m30001| Thu Jun 14 01:33:37 [conn4] moveChunk number of documents: 0
m30001| Thu Jun 14 01:33:37 [initandlisten] connection accepted from 127.0.0.1:42727 #5 (5 connections now open)
m30000| Thu Jun 14 01:33:37 [FileAllocator] allocating new datafile /data/db/features10/test.ns, filling with zeroes...
m30000| Thu Jun 14 01:33:38 [FileAllocator] done allocating datafile /data/db/features10/test.ns, size: 16MB, took 0.918 secs
m30001| Thu Jun 14 01:33:38 [FileAllocator] done allocating datafile /data/db/features11/test.1, size: 32MB, took 0.944 secs
m30000| Thu Jun 14 01:33:38 [FileAllocator] allocating new datafile /data/db/features10/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:38 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:38 [FileAllocator] done allocating datafile /data/db/features10/test.0, size: 16MB, took 0.275 secs
m30000| Thu Jun 14 01:33:38 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:33:38 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:38 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:33:38 [migrateThread] build index test.foo { num: 1.0 }
m30000| Thu Jun 14 01:33:38 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:38 [migrateThread] build index test.foo { y: 1.0 }
m30000| Thu Jun 14 01:33:38 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:38 [FileAllocator] allocating new datafile /data/db/features10/test.1, filling with zeroes...
m30000| Thu Jun 14 01:33:38 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 10.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:33:39 [FileAllocator] done allocating datafile /data/db/features10/test.1, size: 32MB, took 0.547 secs
m30001| Thu Jun 14 01:33:39 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:39 [conn4] moveChunk setting version to: 2|0||4fd977b00a24f327487567a2
m30000| Thu Jun 14 01:33:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 10.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:33:39 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:39-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652019458), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 10.0 }, max: { num: MaxKey }, step1 of 5: 1205, step2 of 5: 0, step3 of 5: 14, step4 of 5: 0, step5 of 5: 795 } }
m30000| Thu Jun 14 01:33:39 [initandlisten] connection accepted from 127.0.0.1:51321 #10 (10 connections now open)
m30001| Thu Jun 14 01:33:39 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn4] moveChunk updating self version to: 2|1||4fd977b00a24f327487567a2 through { num: MinKey } -> { num: 10.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:33:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:39-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652019462), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 10.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:39 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:39 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:33:39 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652017:512547722' unlocked.
m30001| Thu Jun 14 01:33:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:39-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652019463), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 10.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2004, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:33:39 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 10.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_10.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:120 r:326 w:1077916 reslen:37 2023ms
m30999| Thu Jun 14 01:33:39 [conn] moveChunk result: { ok: 1.0 }
m30000| Thu Jun 14 01:33:39 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:39 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 2|1||4fd977b00a24f327487567a2 based on: 1|2||4fd977b00a24f327487567a2
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977b00a24f327487567a2'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977b00a24f327487567a2'), ok: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { num: MinKey } max: { num: 10.0 } dataWritten: 8312782 splitThreshold: 471859
m30999| Thu Jun 14 01:33:39 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977b00a24f327487567a2'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977b00a24f327487567a2'), serverID: ObjectId('4fd977b00a24f327487567a0'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 10.0 } max: { num: MaxKey } dataWritten: 8083677 splitThreshold: 471859
m30999| Thu Jun 14 01:33:39 [conn] chunk not full enough to trigger auto-split no split entry
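The block above is a complete chunk migration: the { num: 10 } -> MaxKey chunk moves from shard0001 to shard0000, then mongos reloads the chunk manager and re-versions both shards. A minimal sketch of the command that starts such a migration, mirroring the "CMD: movechunk" line in the log and assuming a shell connected to the mongos:

    // Move the chunk of test.foo that contains { num: 20 } to the mongod on
    // localhost:30000 (shard0000); mongos coordinates the moveChunk.start,
    // data transfer, commit and delete steps shown above.
    db.adminCommand({ movechunk: "test.foo", find: { num: 20 }, to: "localhost:30000" });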
m30000| Thu Jun 14 01:33:39 [conn7] build index test.foo { x: 1.0 }
m30000| Thu Jun 14 01:33:39 [conn7] build index done. scanned 1 total records. 0 secs
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo { x: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] sharded index write for test.system.indexes
m30999| Thu Jun 14 01:33:39 [conn] sharded index write for test.system.indexes
m30999| Thu Jun 14 01:33:39 [conn] User Assertion: 10205:can't use unique indexes with sharding ns:test.foo key: { z: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] AssertionException while processing op type : 2002 to : test.system.indexes :: caused by :: 10205 can't use unique indexes with sharding ns:test.foo key: { z: 1.0 }
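The 10205 assertion above is mongos rejecting a unique index whose key does not start with the shard key: test.foo is sharded on { num: 1 }, so uniqueness on { z: 1 } cannot be enforced across shards. A minimal sketch of the behaviour, assuming a shell connected to the mongos; the { num: 1, bar: 1 } index built just below is an example of the allowed pattern, a compound key prefixed by the shard key:

    // Rejected: the unique key { z: 1 } is not prefixed by the shard key
    // { num: 1 }, so mongos raises assertion 10205 on the index write.
    db.foo.ensureIndex({ z: 1 }, { unique: true });

    // Allowed: the key starts with the shard key, so each shard can enforce
    // uniqueness locally for the key ranges it owns.
    db.foo.ensureIndex({ num: 1, bar: 1 }, { unique: true });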
m30000| Thu Jun 14 01:33:39 [conn7] build index test.foo { num: 1.0, bar: 1.0 }
m30000| Thu Jun 14 01:33:39 [conn7] build index done. scanned 1 total records. 0 secs
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo { num: 1.0, bar: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] sharded index write for test.system.indexes
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo2 { _id: 1 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:39 [conn3] info: creating collection test.foo2 on add index
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo2 { a: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] sharded index write for test.system.indexes
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.foo2",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"a" : 1
},
"ns" : "test.foo2",
"name" : "a_1"
}
]
m30999| Thu Jun 14 01:33:39 [conn] CMD: shardcollection: { shardcollection: "test.foo2", key: { num: 1.0 } }
m30999| Thu Jun 14 01:33:39 [conn] enable sharding on: test.foo2 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] going to create 1 chunk(s) for: test.foo2 using new epoch 4fd977b30a24f327487567a3
m30001| Thu Jun 14 01:33:39 [conn4] build index test.foo2 { num: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] ChunkManager: time to load chunks for test.foo2: 0ms sequenceNumber: 5 version: 1|0||4fd977b30a24f327487567a3 based on: (empty)
m30999| Thu Jun 14 01:33:39 [conn] resetting shard version of test.foo2 on localhost:30000, version is zero
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0000 localhost:30000 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0001 localhost:30001 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a3'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo2", need_authoritative: true, errmsg: "first time for collection 'test.foo2'", ok: 0.0 }
m30001| Thu Jun 14 01:33:39 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo3 { _id: 1 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:39 [conn3] info: creating collection test.foo3 on add index
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo3 { a: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0001 localhost:30001 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a3'), serverID: ObjectId('4fd977b00a24f327487567a0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] sharded index write for test.system.indexes
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.foo3",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"a" : 1
},
"unique" : true,
"ns" : "test.foo3",
"name" : "a_1"
}
]
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo7 { _id: 1 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:39 [conn3] info: creating collection test.foo7 on add index
m30001| Thu Jun 14 01:33:39 [conn3] build index test.foo7 { num: 1.0, a: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] sharded index write for test.system.indexes
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.foo7",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"num" : 1,
"a" : 1
},
"unique" : true,
"ns" : "test.foo7",
"name" : "num_1_a_1"
}
]
m30999| Thu Jun 14 01:33:39 [conn] CMD: shardcollection: { shardcollection: "test.foo7", key: { num: 1.0 } }
m30999| Thu Jun 14 01:33:39 [conn] enable sharding on: test.foo7 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] going to create 1 chunk(s) for: test.foo7 using new epoch 4fd977b30a24f327487567a4
m30001| Thu Jun 14 01:33:39 [conn4] build index test.foo7 { num: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] ChunkManager: time to load chunks for test.foo7: 0ms sequenceNumber: 6 version: 1|0||4fd977b30a24f327487567a4 based on: (empty)
m30999| Thu Jun 14 01:33:39 [conn] resetting shard version of test.foo7 on localhost:30000, version is zero
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0000 localhost:30000 test.foo7 { setShardVersion: "test.foo7", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0001 localhost:30001 test.foo7 { setShardVersion: "test.foo7", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a4'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo7", need_authoritative: true, errmsg: "first time for collection 'test.foo7'", ok: 0.0 }
m30001| Thu Jun 14 01:33:39 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0001 localhost:30001 test.foo7 { setShardVersion: "test.foo7", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a4'), serverID: ObjectId('4fd977b00a24f327487567a0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] about to initiate autosplit: ns:test.foo2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 174609 splitThreshold: 921
m30999| Thu Jun 14 01:33:39 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:33:39 [conn3] JS Error: Error: can't use sharded collection from db.eval nofile_b:1
m30001| Thu Jun 14 01:33:39 [conn3] JS Error: Error: can't use sharded collection from db.eval nofile_b:1
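The two JS errors above come from db.eval being run against a database that contains sharded collections: server-side eval executes on a single shard and may not touch a sharded collection. A minimal sketch of the kind of call that produces this error, assuming a shell connected to the mongos; the exact function body used by the test is not shown in the log:

    // db.eval ships the function to one server for execution; because
    // test.foo is sharded, accessing it from inside eval is rejected with
    // "can't use sharded collection from db.eval".
    db.eval(function() {
        return db.foo.count();   // hypothetical body; fails because foo is sharded
    });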
m30999| Thu Jun 14 01:33:39 [conn] CMD: shardcollection: { shardcollection: "test.foo4", key: { num: 1.0 }, unique: true }
m30999| Thu Jun 14 01:33:39 [conn] enable sharding on: test.foo4 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] going to create 1 chunk(s) for: test.foo4 using new epoch 4fd977b30a24f327487567a5
m30001| Thu Jun 14 01:33:39 [conn4] build index test.foo4 { _id: 1 }
m30001| Thu Jun 14 01:33:39 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:39 [conn4] info: creating collection test.foo4 on add index
m30001| Thu Jun 14 01:33:39 [conn4] build index test.foo4 { num: 1.0 }
m30001| Thu Jun 14 01:33:39 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:39 [conn] ChunkManager: time to load chunks for test.foo4: 0ms sequenceNumber: 7 version: 1|0||4fd977b30a24f327487567a5 based on: (empty)
m30999| Thu Jun 14 01:33:39 [conn] resetting shard version of test.foo4 on localhost:30000, version is zero
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0000 localhost:30000 test.foo4 { setShardVersion: "test.foo4", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0001 localhost:30001 test.foo4 { setShardVersion: "test.foo4", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a5'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo4", need_authoritative: true, errmsg: "first time for collection 'test.foo4'", ok: 0.0 }
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion shard0001 localhost:30001 test.foo4 { setShardVersion: "test.foo4", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a5'), serverID: ObjectId('4fd977b00a24f327487567a0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30001| Thu Jun 14 01:33:39 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:39 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
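The shardcollection command above for test.foo4 carries unique: true, so the { num: 1 } shard-key values must be unique across the collection. A minimal sketch, assuming a shell connected to the mongos with sharding already enabled on the test database:

    // Shard test.foo4 on { num: 1 } and require shard-key values to be
    // unique; the index on { num: 1 } built by shard0001 above is created
    // as part of this command.
    db.adminCommand({ shardcollection: "test.foo4", key: { num: 1 }, unique: true });

    // Equivalent shell helper:
    // sh.shardCollection("test.foo4", { num: 1 }, true);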
m30001| Thu Jun 14 01:33:39 [conn4] received splitChunk request: { splitChunk: "test.foo4", keyPattern: { num: 1.0 }, min: { num: MinKey }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 10.0 } ], shardId: "test.foo4-num_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:39 [conn4] created new distributed lock for test.foo4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:39 [conn4] distributed lock 'test.foo4/domU-12-31-39-01-70-B4:30001:1339652017:512547722' acquired, ts : 4fd977b345f5df931002d0a0
m30999| Thu Jun 14 01:33:39 [conn] splitting: test.foo4 shard: ns:test.foo4 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey }
m30001| Thu Jun 14 01:33:39 [conn4] splitChunk accepted at version 1|0||4fd977b30a24f327487567a5
m30001| Thu Jun 14 01:33:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:39-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652019543), what: "split", ns: "test.foo4", details: { before: { min: { num: MinKey }, max: { num: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: MinKey }, max: { num: 10.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977b30a24f327487567a5') }, right: { min: { num: 10.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977b30a24f327487567a5') } } }
m30001| Thu Jun 14 01:33:39 [conn4] distributed lock 'test.foo4/domU-12-31-39-01-70-B4:30001:1339652017:512547722' unlocked.
m30999| Thu Jun 14 01:33:39 [conn] ChunkManager: time to load chunks for test.foo4: 0ms sequenceNumber: 8 version: 1|2||4fd977b30a24f327487567a5 based on: 1|0||4fd977b30a24f327487567a5
m30999| Thu Jun 14 01:33:39 [conn] CMD: movechunk: { movechunk: "test.foo4", find: { num: 20.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:39 [conn] moving chunk ns: test.foo4 moving ( ns:test.foo4 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { num: 10.0 } max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:39 [conn4] received moveChunk request: { moveChunk: "test.foo4", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 10.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo4-num_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:39 [conn4] created new distributed lock for test.foo4 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:39 [conn4] distributed lock 'test.foo4/domU-12-31-39-01-70-B4:30001:1339652017:512547722' acquired, ts : 4fd977b345f5df931002d0a1
m30001| Thu Jun 14 01:33:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:39-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652019546), what: "moveChunk.start", ns: "test.foo4", details: { min: { num: 10.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:39 [conn4] moveChunk request accepted at version 1|2||4fd977b30a24f327487567a5
m30001| Thu Jun 14 01:33:39 [conn4] moveChunk number of documents: 0
m30000| Thu Jun 14 01:33:39 [migrateThread] build index test.foo4 { _id: 1 }
m30000| Thu Jun 14 01:33:39 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:39 [migrateThread] info: creating collection test.foo4 on add index
m30000| Thu Jun 14 01:33:39 [migrateThread] build index test.foo4 { num: 1.0 }
m30000| Thu Jun 14 01:33:39 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo4' { num: 10.0 } -> { num: MaxKey }
m30001| Thu Jun 14 01:33:40 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo4", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:40 [conn4] moveChunk setting version to: 2|0||4fd977b30a24f327487567a5
m30000| Thu Jun 14 01:33:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo4' { num: 10.0 } -> { num: MaxKey }
m30000| Thu Jun 14 01:33:40 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:40-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652020558), what: "moveChunk.to", ns: "test.foo4", details: { min: { num: 10.0 }, max: { num: MaxKey }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30001| Thu Jun 14 01:33:40 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo4", from: "localhost:30001", min: { num: 10.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:40 [conn4] moveChunk updating self version to: 2|1||4fd977b30a24f327487567a5 through { num: MinKey } -> { num: 10.0 } for collection 'test.foo4'
m30001| Thu Jun 14 01:33:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:40-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652020562), what: "moveChunk.commit", ns: "test.foo4", details: { min: { num: 10.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:40 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:40 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:33:40 [conn4] distributed lock 'test.foo4/domU-12-31-39-01-70-B4:30001:1339652017:512547722' unlocked.
m30001| Thu Jun 14 01:33:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:40-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652020563), what: "moveChunk.from", ns: "test.foo4", details: { min: { num: 10.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:33:40 [conn4] command admin.$cmd command: { moveChunk: "test.foo4", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 10.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo4-num_10.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:120 r:1031 w:1079554 reslen:37 1017ms
m30999| Thu Jun 14 01:33:40 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:33:40 [conn] ChunkManager: time to load chunks for test.foo4: 0ms sequenceNumber: 9 version: 2|1||4fd977b30a24f327487567a5 based on: 1|2||4fd977b30a24f327487567a5
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion shard0001 localhost:30001 test.foo4 { setShardVersion: "test.foo4", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977b30a24f327487567a5'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977b30a24f327487567a5'), ok: 1.0 }
m30999| Thu Jun 14 01:33:40 [conn] about to initiate autosplit: ns:test.foo4 at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { num: MinKey } max: { num: 10.0 } dataWritten: 2978438 splitThreshold: 471859
m30999| Thu Jun 14 01:33:40 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion shard0000 localhost:30000 test.foo4 { setShardVersion: "test.foo4", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a5'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo4", need_authoritative: true, errmsg: "first time for collection 'test.foo4'", ok: 0.0 }
m30000| Thu Jun 14 01:33:40 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion shard0000 localhost:30000 test.foo4 { setShardVersion: "test.foo4", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977b30a24f327487567a5'), serverID: ObjectId('4fd977b00a24f327487567a0'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9108358
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:40 [conn] about to initiate autosplit: ns:test.foo4 at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { num: 10.0 } max: { num: MaxKey } dataWritten: 6093137 splitThreshold: 471859
m30999| Thu Jun 14 01:33:40 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:33:40 [conn3] build index test.foo4a { _id: 1 }
m30001| Thu Jun 14 01:33:40 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:40 [conn3] CMD: drop test.tmp.convertToCapped.foo4a
m30001| Thu Jun 14 01:33:40 [conn3] CMD: drop test.foo4a
m30001| Thu Jun 14 01:33:40 [conn3] build index test.foo6 { _id: 1 }
m30001| Thu Jun 14 01:33:40 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:40 [conn3] build index test.foo6 { a: 1.0 }
m30001| Thu Jun 14 01:33:40 [conn3] build index done. scanned 3 total records. 0 secs
m30999| Thu Jun 14 01:33:40 [conn] sharded index write for test.system.indexes
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.foo6",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"a" : 1
},
"ns" : "test.foo6",
"name" : "a_1"
}
]
m30999| Thu Jun 14 01:33:40 [conn] CMD: shardcollection: { shardcollection: "test.foo6", key: { a: 1.0 } }
m30999| Thu Jun 14 01:33:40 [conn] enable sharding on: test.foo6 with shard key: { a: 1.0 }
m30999| Thu Jun 14 01:33:40 [conn] going to create 1 chunk(s) for: test.foo6 using new epoch 4fd977b40a24f327487567a6
m30999| Thu Jun 14 01:33:40 [conn] ChunkManager: time to load chunks for test.foo6: 0ms sequenceNumber: 10 version: 1|0||4fd977b40a24f327487567a6 based on: (empty)
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion shard0001 localhost:30001 test.foo6 { setShardVersion: "test.foo6", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b40a24f327487567a6'), serverID: ObjectId('4fd977b00a24f327487567a0'), shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo6", need_authoritative: true, errmsg: "first time for collection 'test.foo6'", ok: 0.0 }
m30001| Thu Jun 14 01:33:40 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion shard0001 localhost:30001 test.foo6 { setShardVersion: "test.foo6", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977b40a24f327487567a6'), serverID: ObjectId('4fd977b00a24f327487567a0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9109e18
m30999| Thu Jun 14 01:33:40 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:33:40 [conn4] received splitChunk request: { splitChunk: "test.foo6", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: 2.0 } ], shardId: "test.foo6-a_MinKey", configdb: "localhost:30000" }
m30999| Thu Jun 14 01:33:40 [conn] splitting: test.foo6 shard: ns:test.foo6 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey }
m30001| Thu Jun 14 01:33:40 [conn4] created new distributed lock for test.foo6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:40 [conn4] distributed lock 'test.foo6/domU-12-31-39-01-70-B4:30001:1339652017:512547722' acquired, ts : 4fd977b445f5df931002d0a2
m30001| Thu Jun 14 01:33:40 [conn4] splitChunk accepted at version 1|0||4fd977b40a24f327487567a6
m30001| Thu Jun 14 01:33:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:40-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652020597), what: "split", ns: "test.foo6", details: { before: { min: { a: MinKey }, max: { a: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 2.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977b40a24f327487567a6') }, right: { min: { a: 2.0 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977b40a24f327487567a6') } } }
m30001| Thu Jun 14 01:33:40 [conn4] distributed lock 'test.foo6/domU-12-31-39-01-70-B4:30001:1339652017:512547722' unlocked.
m30999| Thu Jun 14 01:33:40 [conn] ChunkManager: time to load chunks for test.foo6: 0ms sequenceNumber: 11 version: 1|2||4fd977b40a24f327487567a6 based on: 1|0||4fd977b40a24f327487567a6
m30999| Thu Jun 14 01:33:40 [conn] CMD: movechunk: { movechunk: "test.foo6", find: { a: 3.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:40 [conn] moving chunk ns: test.foo6 moving ( ns:test.foo6 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 2.0 } max: { a: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:40 [conn4] received moveChunk request: { moveChunk: "test.foo6", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 2.0 }, max: { a: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo6-a_2.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:40 [conn4] created new distributed lock for test.foo6 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:40 [conn4] distributed lock 'test.foo6/domU-12-31-39-01-70-B4:30001:1339652017:512547722' acquired, ts : 4fd977b445f5df931002d0a3
m30001| Thu Jun 14 01:33:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:40-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652020601), what: "moveChunk.start", ns: "test.foo6", details: { min: { a: 2.0 }, max: { a: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:40 [conn4] moveChunk request accepted at version 1|2||4fd977b40a24f327487567a6
m30001| Thu Jun 14 01:33:40 [conn4] moveChunk number of documents: 2
m30000| Thu Jun 14 01:33:40 [migrateThread] build index test.foo6 { _id: 1 }
m30000| Thu Jun 14 01:33:40 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:40 [migrateThread] info: creating collection test.foo6 on add index
m30000| Thu Jun 14 01:33:40 [migrateThread] build index test.foo6 { a: 1.0 }
m30000| Thu Jun 14 01:33:40 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo6' { a: 2.0 } -> { a: MaxKey }
m30001| Thu Jun 14 01:33:41 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo6", from: "localhost:30001", min: { a: 2.0 }, max: { a: MaxKey }, shardKeyPattern: { a: 1 }, state: "steady", counts: { cloned: 2, clonedBytes: 66, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:41 [conn4] moveChunk setting version to: 2|0||4fd977b40a24f327487567a6
m30000| Thu Jun 14 01:33:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo6' { a: 2.0 } -> { a: MaxKey }
m30000| Thu Jun 14 01:33:41 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:41-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652021610), what: "moveChunk.to", ns: "test.foo6", details: { min: { a: 2.0 }, max: { a: MaxKey }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1006 } }
m30001| Thu Jun 14 01:33:41 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo6", from: "localhost:30001", min: { a: 2.0 }, max: { a: MaxKey }, shardKeyPattern: { a: 1 }, state: "done", counts: { cloned: 2, clonedBytes: 66, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:41 [conn4] moveChunk updating self version to: 2|1||4fd977b40a24f327487567a6 through { a: MinKey } -> { a: 2.0 } for collection 'test.foo6'
m30001| Thu Jun 14 01:33:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:41-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652021614), what: "moveChunk.commit", ns: "test.foo6", details: { min: { a: 2.0 }, max: { a: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:41 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:41 [conn4] moveChunk deleted: 2
m30001| Thu Jun 14 01:33:41 [conn4] distributed lock 'test.foo6/domU-12-31-39-01-70-B4:30001:1339652017:512547722' unlocked.
m30001| Thu Jun 14 01:33:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:41-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42724", time: new Date(1339652021615), what: "moveChunk.from", ns: "test.foo6", details: { min: { a: 2.0 }, max: { a: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:33:41 [conn4] command admin.$cmd command: { moveChunk: "test.foo6", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 2.0 }, max: { a: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo6-a_2.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:120 r:1490 w:1079837 reslen:37 1015ms
m30999| Thu Jun 14 01:33:41 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:33:41 [conn] ChunkManager: time to load chunks for test.foo6: 0ms sequenceNumber: 12 version: 2|1||4fd977b40a24f327487567a6 based on: 1|2||4fd977b40a24f327487567a6
m30001| Thu Jun 14 01:33:41 [conn3] build index test.foo8 { _id: 1 }
m30001| Thu Jun 14 01:33:41 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:41 [conn3] build index test.foo9 { _id: 1 }
m30001| Thu Jun 14 01:33:41 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:41 [conn3] build index test.foo9 { a: 1.0 }
m30001| Thu Jun 14 01:33:41 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:33:41 [conn] sharded index write for test.system.indexes
m30001| Thu Jun 14 01:33:41 [conn4] checkShardingIndex for 'test.foo9' failed: found null value in key { a: null } for doc: _id: ObjectId('4fd977b54886a933ca894959')
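The checkShardingIndex failure above is shard0001 refusing to let test.foo9 be sharded on { a: 1 } because an existing document has no value for a (it indexes as { a: null }). A minimal sketch of a sequence that produces this error, assuming a shell connected to the mongos; the actual documents inserted by the test are not shown in the log:

    // A document missing the 'a' field indexes as { a: null }, which is not
    // an acceptable shard-key value here, so shardcollection is rejected.
    db.foo9.save({ b: 1 });                  // hypothetical document without 'a'
    db.foo9.ensureIndex({ a: 1 });
    db.adminCommand({ shardcollection: "test.foo9", key: { a: 1 } });   // fails: "found null value in key"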
m30999| Thu Jun 14 01:33:41 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:33:41 [conn3] end connection 127.0.0.1:51307 (9 connections now open)
m30000| Thu Jun 14 01:33:41 [conn6] end connection 127.0.0.1:51312 (8 connections now open)
m30000| Thu Jun 14 01:33:41 [conn5] end connection 127.0.0.1:51311 (8 connections now open)
m30000| Thu Jun 14 01:33:41 [conn7] end connection 127.0.0.1:51315 (6 connections now open)
m30001| Thu Jun 14 01:33:41 [conn3] end connection 127.0.0.1:42723 (4 connections now open)
m30001| Thu Jun 14 01:33:41 [conn4] end connection 127.0.0.1:42724 (3 connections now open)
Thu Jun 14 01:33:42 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:33:42 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:33:42 [interruptThread] now exiting
m30000| Thu Jun 14 01:33:42 dbexit:
m30000| Thu Jun 14 01:33:42 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:33:42 [interruptThread] closing listening socket: 21
m30000| Thu Jun 14 01:33:42 [interruptThread] closing listening socket: 22
m30000| Thu Jun 14 01:33:42 [interruptThread] closing listening socket: 23
m30000| Thu Jun 14 01:33:42 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:33:42 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:33:42 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:33:42 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:33:42 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:33:42 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:33:42 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:33:42 dbexit: really exiting now
m30001| Thu Jun 14 01:33:42 [conn5] end connection 127.0.0.1:42727 (2 connections now open)
Thu Jun 14 01:33:43 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:33:43 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:33:43 [interruptThread] now exiting
m30001| Thu Jun 14 01:33:43 dbexit:
m30001| Thu Jun 14 01:33:43 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:33:43 [interruptThread] closing listening socket: 24
m30001| Thu Jun 14 01:33:43 [interruptThread] closing listening socket: 25
m30001| Thu Jun 14 01:33:43 [interruptThread] closing listening socket: 26
m30001| Thu Jun 14 01:33:43 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:33:43 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:33:43 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:33:43 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:33:43 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:33:43 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:33:43 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:33:43 dbexit: really exiting now
Thu Jun 14 01:33:44 shell: stopped mongo program on port 30001
*** ShardingTest features1 completed successfully in 9.39 seconds ***
9435.635090ms
Thu Jun 14 01:33:44 [initandlisten] connection accepted from 127.0.0.1:54825 #26 (13 connections now open)
*******************************************
Test : features2.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/features2.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/features2.js";TestData.testFile = "features2.js";TestData.testName = "features2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:33:44 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/features20'
Thu Jun 14 01:33:44 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/features20
m30000| Thu Jun 14 01:33:44
m30000| Thu Jun 14 01:33:44 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:33:44
m30000| Thu Jun 14 01:33:44 [initandlisten] MongoDB starting : pid=24501 port=30000 dbpath=/data/db/features20 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:33:44 [initandlisten]
m30000| Thu Jun 14 01:33:44 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:33:44 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:33:44 [initandlisten]
m30000| Thu Jun 14 01:33:44 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:33:44 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:33:44 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:33:44 [initandlisten]
m30000| Thu Jun 14 01:33:44 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:33:44 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:33:44 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:33:44 [initandlisten] options: { dbpath: "/data/db/features20", port: 30000 }
m30000| Thu Jun 14 01:33:44 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:33:44 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/features21'
m30000| Thu Jun 14 01:33:44 [initandlisten] connection accepted from 127.0.0.1:51324 #1 (1 connection now open)
Thu Jun 14 01:33:44 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/features21
m30001| Thu Jun 14 01:33:44
m30001| Thu Jun 14 01:33:44 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:33:44
m30001| Thu Jun 14 01:33:44 [initandlisten] MongoDB starting : pid=24514 port=30001 dbpath=/data/db/features21 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:33:44 [initandlisten]
m30001| Thu Jun 14 01:33:44 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:33:44 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:33:44 [initandlisten]
m30001| Thu Jun 14 01:33:44 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:33:44 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:33:44 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:33:44 [initandlisten]
m30001| Thu Jun 14 01:33:44 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:33:44 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:33:44 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:33:44 [initandlisten] options: { dbpath: "/data/db/features21", port: 30001 }
m30001| Thu Jun 14 01:33:44 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:33:44 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:33:45 [initandlisten] connection accepted from 127.0.0.1:42733 #1 (1 connection now open)
m30000| Thu Jun 14 01:33:45 [initandlisten] connection accepted from 127.0.0.1:51327 #2 (2 connections now open)
ShardingTest features2 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
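Everything from "Resetting db path" down to this config summary is the ShardingTest helper in the mongo shell standing up the features2 fixture: two mongod shards, one config server, and a mongos on port 30999. A rough sketch of how a test script drives this; the positional constructor arguments are an assumption based on the era's test harness, not taken from the log:

    // Hypothetical reconstruction of the fixture setup: test name, number of
    // shards, verbosity, number of mongos. ShardingTest launches the mongod
    // and mongos processes whose startup banners appear in this log.
    var s = new ShardingTest("features2", 2, 1, 1);

    var db = s.getDB("test");   // all test traffic then goes through the mongos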
m30000| Thu Jun 14 01:33:45 [FileAllocator] allocating new datafile /data/db/features20/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:33:45 [FileAllocator] creating directory /data/db/features20/_tmp
Thu Jun 14 01:33:45 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30999| Thu Jun 14 01:33:45 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:33:45 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24529 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:33:45 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:33:45 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:33:45 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:33:45 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:33:45 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:45 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:45 [initandlisten] connection accepted from 127.0.0.1:51329 #3 (3 connections now open)
m30000| Thu Jun 14 01:33:45 [FileAllocator] done allocating datafile /data/db/features20/config.ns, size: 16MB, took 0.264 secs
m30000| Thu Jun 14 01:33:45 [FileAllocator] allocating new datafile /data/db/features20/config.0, filling with zeroes...
m30000| Thu Jun 14 01:33:45 [FileAllocator] done allocating datafile /data/db/features20/config.0, size: 16MB, took 0.248 secs
m30000| Thu Jun 14 01:33:45 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn2] insert config.settings keyUpdates:0 locks(micros) w:529522 529ms
m30000| Thu Jun 14 01:33:45 [FileAllocator] allocating new datafile /data/db/features20/config.1, filling with zeroes...
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:33:45 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:45 [mongosMain] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:33:45 [initandlisten] connection accepted from 127.0.0.1:51332 #4 (4 connections now open)
m30999| Thu Jun 14 01:33:45 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:45 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:45 [initandlisten] connection accepted from 127.0.0.1:51333 #5 (5 connections now open)
m30000| Thu Jun 14 01:33:45 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:45 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:33:45 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:45 [mongosMain] waiting for connections on port 30999
m30000| Thu Jun 14 01:33:45 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:33:45 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:33:45 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:45 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:45 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:33:45 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:33:45 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:33:45 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:33:45
m30999| Thu Jun 14 01:33:45 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:33:45 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:45 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:45 [Balancer] connected connection!
m30000| Thu Jun 14 01:33:45 [initandlisten] connection accepted from 127.0.0.1:51334 #6 (6 connections now open)
m30000| Thu Jun 14 01:33:45 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:45 [Balancer] Refreshing MaxChunkSize: 50
m30000| Thu Jun 14 01:33:45 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:45 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:33:45 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:33:45 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652025:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652025:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652025:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:33:45 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977b9b62151f2b25ed3c5" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:33:45 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652025:1804289383' acquired, ts : 4fd977b9b62151f2b25ed3c5
m30999| Thu Jun 14 01:33:45 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:33:45 [Balancer] no collections to balance
m30999| Thu Jun 14 01:33:45 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:33:45 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:33:45 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652025:1804289383' unlocked.
m30999| Thu Jun 14 01:33:45 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652025:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:33:45 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:33:45 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652025:1804289383', sleeping for 30000ms
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:33:45 [mongosMain] connection accepted from 127.0.0.1:43314 #1 (1 connection now open)
m30999| Thu Jun 14 01:33:45 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:33:45 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:33:45 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:45 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:33:46 [FileAllocator] done allocating datafile /data/db/features20/config.1, size: 32MB, took 0.53 secs
m30000| Thu Jun 14 01:33:46 [conn5] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:301 w:1826 reslen:177 393ms
m30999| Thu Jun 14 01:33:46 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:33:46 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:46 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:42743 #2 (2 connections now open)
m30999| Thu Jun 14 01:33:46 [conn] connected connection!
m30999| Thu Jun 14 01:33:46 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:33:46 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:46 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:46 [conn] connected connection!
m30999| Thu Jun 14 01:33:46 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977b9b62151f2b25ed3c4
m30999| Thu Jun 14 01:33:46 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:33:46 BackgroundJob starting: WriteBackListener-localhost:30000
m30000| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:51337 #7 (7 connections now open)
m30999| Thu Jun 14 01:33:46 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:46 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:42745 #3 (3 connections now open)
m30999| Thu Jun 14 01:33:46 [conn] connected connection!
m30999| Thu Jun 14 01:33:46 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977b9b62151f2b25ed3c4
Waiting for active hosts...
Waiting for the balancer lock...
m30999| Thu Jun 14 01:33:46 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:33:46 BackgroundJob starting: WriteBackListener-localhost:30001
Waiting again for active hosts after balancer is off...
m30999| Thu Jun 14 01:33:46 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:33:46 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:46 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:42746 #4 (4 connections now open)
m30999| Thu Jun 14 01:33:46 [conn] connected connection!
m30999| Thu Jun 14 01:33:46 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:33:46 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:33:46 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:33:46 [conn] sharded index write for test.system.indexes
m30001| Thu Jun 14 01:33:46 [FileAllocator] allocating new datafile /data/db/features21/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:33:46 [FileAllocator] creating directory /data/db/features21/_tmp
m30001| Thu Jun 14 01:33:46 [FileAllocator] done allocating datafile /data/db/features21/test.ns, size: 16MB, took 0.285 secs
m30001| Thu Jun 14 01:33:46 [FileAllocator] allocating new datafile /data/db/features21/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:46 [FileAllocator] done allocating datafile /data/db/features21/test.0, size: 16MB, took 0.29 secs
m30001| Thu Jun 14 01:33:46 [conn3] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:33:46 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:46 [conn3] insert test.foo keyUpdates:0 locks(micros) W:51 w:587681 587ms
m30001| Thu Jun 14 01:33:46 [conn3] build index test.foo { x: 1.0 }
m30001| Thu Jun 14 01:33:46 [conn3] build index done. scanned 3 total records. 0 secs
m30001| Thu Jun 14 01:33:46 [FileAllocator] allocating new datafile /data/db/features21/test.1, filling with zeroes...
m30000| Thu Jun 14 01:33:46 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:33:46 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:51340 #8 (8 connections now open)
m30000| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:51341 #9 (9 connections now open)
m30000| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:51342 #10 (10 connections now open)
m30001| Thu Jun 14 01:33:46 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:46 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 2.0 } ], shardId: "test.foo-x_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:46 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:46 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' acquired, ts : 4fd977bab711af7b6f1d230d
m30001| Thu Jun 14 01:33:46 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652026:1983276373 (sleeping for 30000ms)
m30001| Thu Jun 14 01:33:46 [conn4] splitChunk accepted at version 1|0||4fd977bab62151f2b25ed3c6
m30001| Thu Jun 14 01:33:46 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:46-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652026834), what: "split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 2.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977bab62151f2b25ed3c6') }, right: { min: { x: 2.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977bab62151f2b25ed3c6') } } }
m30001| Thu Jun 14 01:33:46 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' unlocked.
m30001| Thu Jun 14 01:33:46 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 2.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-x_2.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:46 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:46 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' acquired, ts : 4fd977bab711af7b6f1d230e
m30001| Thu Jun 14 01:33:46 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:46-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652026838), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 2.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:46 [conn4] moveChunk request accepted at version 1|2||4fd977bab62151f2b25ed3c6
m30001| Thu Jun 14 01:33:46 [conn4] moveChunk number of documents: 2
m30001| Thu Jun 14 01:33:46 [initandlisten] connection accepted from 127.0.0.1:42750 #5 (5 connections now open)
m30000| Thu Jun 14 01:33:46 [FileAllocator] allocating new datafile /data/db/features20/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:33:46 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { x: 1.0 } }
m30999| Thu Jun 14 01:33:46 [conn] enable sharding on: test.foo with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:33:46 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd977bab62151f2b25ed3c6
m30999| Thu Jun 14 01:33:46 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd977bab62151f2b25ed3c6 based on: (empty)
m30999| Thu Jun 14 01:33:46 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977bab62151f2b25ed3c6'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30999| Thu Jun 14 01:33:46 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:33:46 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977bab62151f2b25ed3c6'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30999| Thu Jun 14 01:33:46 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:46 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey }
m30999| Thu Jun 14 01:33:46 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd977bab62151f2b25ed3c6 based on: 1|0||4fd977bab62151f2b25ed3c6
m30999| Thu Jun 14 01:33:46 [conn] CMD: movechunk: { movechunk: "test.foo", find: { x: 3.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:46 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 2.0 } max: { x: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:47 [FileAllocator] done allocating datafile /data/db/features21/test.1, size: 32MB, took 0.84 secs
m30000| Thu Jun 14 01:33:47 [FileAllocator] done allocating datafile /data/db/features20/test.ns, size: 16MB, took 0.805 secs
m30000| Thu Jun 14 01:33:47 [FileAllocator] allocating new datafile /data/db/features20/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:47 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 2.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:47 [FileAllocator] done allocating datafile /data/db/features20/test.0, size: 16MB, took 0.3 secs
m30000| Thu Jun 14 01:33:47 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:33:47 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:47 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:33:47 [migrateThread] build index test.foo { x: 1.0 }
m30000| Thu Jun 14 01:33:47 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:47 [FileAllocator] allocating new datafile /data/db/features20/test.1, filling with zeroes...
m30000| Thu Jun 14 01:33:47 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 2.0 } -> { x: MaxKey }
m30000| Thu Jun 14 01:33:48 [FileAllocator] done allocating datafile /data/db/features20/test.1, size: 32MB, took 0.57 secs
m30001| Thu Jun 14 01:33:48 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 2.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 2, clonedBytes: 66, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:48 [conn4] moveChunk setting version to: 2|0||4fd977bab62151f2b25ed3c6
m30000| Thu Jun 14 01:33:48 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 2.0 } -> { x: MaxKey }
m30000| Thu Jun 14 01:33:48 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:48-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652028854), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 2.0 }, max: { x: MaxKey }, step1 of 5: 1116, step2 of 5: 0, step3 of 5: 2, step4 of 5: 0, step5 of 5: 895 } }
m30000| Thu Jun 14 01:33:48 [initandlisten] connection accepted from 127.0.0.1:51344 #11 (11 connections now open)
m30001| Thu Jun 14 01:33:48 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 2.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 2, clonedBytes: 66, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:48 [conn4] moveChunk updating self version to: 2|1||4fd977bab62151f2b25ed3c6 through { x: MinKey } -> { x: 2.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:33:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:48-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652028859), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 2.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:48 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:48 [conn4] moveChunk deleted: 2
m30001| Thu Jun 14 01:33:48 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' unlocked.
m30001| Thu Jun 14 01:33:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:48-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652028860), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 2.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2007, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:33:48 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 2.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-x_2.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:404 w:326 reslen:37 2022ms
m30999| Thu Jun 14 01:33:48 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:33:48 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 2|1||4fd977bab62151f2b25ed3c6 based on: 1|2||4fd977bab62151f2b25ed3c6
{ "millis" : 2023, "ok" : 1 }
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977bab62151f2b25ed3c6'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ab19f0
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977bab62151f2b25ed3c6'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ab19f0
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:33:48 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977bab62151f2b25ed3c6'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977bab62151f2b25ed3c6'), ok: 1.0 }
m30999| Thu Jun 14 01:33:48 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { x: 2.0 } max: { x: MaxKey } dataWritten: 8083675 splitThreshold: 471859
m30999| Thu Jun 14 01:33:48 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:33:48 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { x: MinKey } max: { x: 2.0 } dataWritten: 8312780 splitThreshold: 471859
m30999| Thu Jun 14 01:33:48 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:33:48 [conn3] build index test.foo2 { _id: 1 }
m30001| Thu Jun 14 01:33:48 [conn3] build index done. scanned 0 total records. 0 secs
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.foo2",
"name" : "_id_"
}
]
m30999| Thu Jun 14 01:33:48 [conn] CMD: shardcollection: { shardcollection: "test.foo2", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:48 [conn] enable sharding on: test.foo2 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:48 [conn] going to create 1 chunk(s) for: test.foo2 using new epoch 4fd977bcb62151f2b25ed3c7
m30999| Thu Jun 14 01:33:48 [conn] ChunkManager: time to load chunks for test.foo2: 0ms sequenceNumber: 5 version: 1|0||4fd977bcb62151f2b25ed3c7 based on: (empty)
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion shard0001 localhost:30001 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c7'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo2", need_authoritative: true, errmsg: "first time for collection 'test.foo2'", ok: 0.0 }
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion shard0001 localhost:30001 test.foo2 { setShardVersion: "test.foo2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c7'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30001| Thu Jun 14 01:33:48 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:48 [conn] about to initiate autosplit: ns:test.foo2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 174585 splitThreshold: 921
m30999| Thu Jun 14 01:33:48 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:33:48 [conn] sharded index write for test.system.indexes
before
m30001| Thu Jun 14 01:33:48 [conn3] build index test.mr { _id: 1 }
m30001| Thu Jun 14 01:33:48 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:48 [conn3] build index test.mr { x: 1.0 }
m30001| Thu Jun 14 01:33:48 [conn3] build index done. scanned 4 total records. 0 secs
m30999| Thu Jun 14 01:33:48 [conn] simple MR, just passthrough
m30001| Thu Jun 14 01:33:48 [conn3] CMD: drop test.tmp.mr.mr_0_inc
m30001| Thu Jun 14 01:33:48 [conn3] build index test.tmp.mr.mr_0_inc { 0: 1 }
m30001| Thu Jun 14 01:33:48 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:48 [conn3] CMD: drop test.tmp.mr.mr_0
m30001| Thu Jun 14 01:33:48 [conn3] build index test.tmp.mr.mr_0 { _id: 1 }
m30001| Thu Jun 14 01:33:48 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:48 [conn3] CMD: drop test.smr1_out
m30001| Thu Jun 14 01:33:48 [conn3] CMD: drop test.tmp.mr.mr_0
m30001| Thu Jun 14 01:33:48 [conn3] CMD: drop test.tmp.mr.mr_0
m30001| Thu Jun 14 01:33:48 [conn3] CMD: drop test.tmp.mr.mr_0_inc
{
"result" : "smr1_out",
"timeMillis" : 35,
"counts" : {
"input" : 4,
"emit" : 8,
"reduce" : 3,
"output" : 3
},
"ok" : 1,
}
m30001| Thu Jun 14 01:33:48 [conn3] CMD: drop test.smr1_out
m30999| Thu Jun 14 01:33:48 [conn] DROP: test.smr1_out
m30999| Thu Jun 14 01:33:48 [conn] simple MR, just passthrough
{
"results" : [
{
"_id" : "a",
"value" : {
"count" : 2
}
},
{
"_id" : "b",
"value" : {
"count" : 3
}
},
{
"_id" : "c",
"value" : {
"count" : 3
}
}
],
"timeMillis" : 15,
"counts" : {
"input" : 4,
"emit" : 8,
"reduce" : 3,
"output" : 3
},
"ok" : 1,
}
{ "a" : 2, "b" : 3, "c" : 3 }
m30001| Thu Jun 14 01:33:48 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:33:48 [conn4] received splitChunk request: { splitChunk: "test.mr", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 2.0 } ], shardId: "test.mr-x_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:48 [conn4] created new distributed lock for test.mr on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:48 [conn4] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' acquired, ts : 4fd977bcb711af7b6f1d230f
m30001| Thu Jun 14 01:33:48 [conn4] splitChunk accepted at version 1|0||4fd977bcb62151f2b25ed3c8
m30001| Thu Jun 14 01:33:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:48-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652028954), what: "split", ns: "test.mr", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 2.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977bcb62151f2b25ed3c8') }, right: { min: { x: 2.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977bcb62151f2b25ed3c8') } } }
m30001| Thu Jun 14 01:33:48 [conn4] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' unlocked.
m30999| Thu Jun 14 01:33:48 [conn] CMD: shardcollection: { shardcollection: "test.mr", key: { x: 1.0 } }
m30999| Thu Jun 14 01:33:48 [conn] enable sharding on: test.mr with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:33:48 [conn] going to create 1 chunk(s) for: test.mr using new epoch 4fd977bcb62151f2b25ed3c8
m30999| Thu Jun 14 01:33:48 [conn] ChunkManager: time to load chunks for test.mr: 0ms sequenceNumber: 6 version: 1|0||4fd977bcb62151f2b25ed3c8 based on: (empty)
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion shard0001 localhost:30001 test.mr { setShardVersion: "test.mr", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.mr", need_authoritative: true, errmsg: "first time for collection 'test.mr'", ok: 0.0 }
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion shard0001 localhost:30001 test.mr { setShardVersion: "test.mr", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30999| Thu Jun 14 01:33:48 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:48 [conn] splitting: test.mr shard: ns:test.mr at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey }
m30999| Thu Jun 14 01:33:48 [conn] ChunkManager: time to load chunks for test.mr: 0ms sequenceNumber: 7 version: 1|2||4fd977bcb62151f2b25ed3c8 based on: 1|0||4fd977bcb62151f2b25ed3c8
m30999| Thu Jun 14 01:33:48 [conn] CMD: movechunk: { movechunk: "test.mr", find: { x: 3.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:33:48 [conn] moving chunk ns: test.mr moving ( ns:test.mr at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 2.0 } max: { x: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:48 [conn4] received moveChunk request: { moveChunk: "test.mr", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 2.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.mr-x_2.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:48 [conn4] created new distributed lock for test.mr on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:48 [conn4] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' acquired, ts : 4fd977bcb711af7b6f1d2310
m30001| Thu Jun 14 01:33:48 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:48-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652028957), what: "moveChunk.start", ns: "test.mr", details: { min: { x: 2.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:48 [conn4] moveChunk request accepted at version 1|2||4fd977bcb62151f2b25ed3c8
m30001| Thu Jun 14 01:33:48 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:33:48 [migrateThread] build index test.mr { _id: 1 }
m30000| Thu Jun 14 01:33:48 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:48 [migrateThread] info: creating collection test.mr on add index
m30000| Thu Jun 14 01:33:48 [migrateThread] build index test.mr { x: 1.0 }
m30000| Thu Jun 14 01:33:48 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:48 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mr' { x: 2.0 } -> { x: MaxKey }
m30001| Thu Jun 14 01:33:49 [conn4] moveChunk data transfer progress: { active: true, ns: "test.mr", from: "localhost:30001", min: { x: 2.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 186, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:49 [conn4] moveChunk setting version to: 2|0||4fd977bcb62151f2b25ed3c8
m30000| Thu Jun 14 01:33:49 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mr' { x: 2.0 } -> { x: MaxKey }
m30000| Thu Jun 14 01:33:49 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:49-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652029966), what: "moveChunk.to", ns: "test.mr", details: { min: { x: 2.0 }, max: { x: MaxKey }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1007 } }
m30001| Thu Jun 14 01:33:49 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mr", from: "localhost:30001", min: { x: 2.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 186, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:49 [conn4] moveChunk updating self version to: 2|1||4fd977bcb62151f2b25ed3c8 through { x: MinKey } -> { x: 2.0 } for collection 'test.mr'
m30001| Thu Jun 14 01:33:49 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:49-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652029971), what: "moveChunk.commit", ns: "test.mr", details: { min: { x: 2.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:49 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:49 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:33:49 [conn4] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30001:1339652026:1983276373' unlocked.
m30001| Thu Jun 14 01:33:49 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:49-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42746", time: new Date(1339652029971), what: "moveChunk.from", ns: "test.mr", details: { min: { x: 2.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:33:49 [conn4] command admin.$cmd command: { moveChunk: "test.mr", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 2.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.mr-x_2.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:864 w:601 reslen:37 1015ms
m30999| Thu Jun 14 01:33:49 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:33:49 [conn] ChunkManager: time to load chunks for test.mr: 0ms sequenceNumber: 8 version: 2|1||4fd977bcb62151f2b25ed3c8 based on: 1|2||4fd977bcb62151f2b25ed3c8
{ "millis" : 1016, "ok" : 1 }
after
m30999| Thu Jun 14 01:33:49 [conn] setShardVersion shard0000 localhost:30000 test.mr { setShardVersion: "test.mr", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ab19f0
m30999| Thu Jun 14 01:33:49 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.mr", need_authoritative: true, errmsg: "first time for collection 'test.mr'", ok: 0.0 }
m30999| Thu Jun 14 01:33:49 [conn] setShardVersion shard0000 localhost:30000 test.mr { setShardVersion: "test.mr", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ab19f0
m30000| Thu Jun 14 01:33:49 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:49 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:49 [conn] setShardVersion shard0001 localhost:30001 test.mr { setShardVersion: "test.mr", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30999| Thu Jun 14 01:33:49 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), ok: 1.0 }
m30001| Thu Jun 14 01:33:49 [conn3] CMD: drop test.tmp.mr.mr_1_inc
m30001| Thu Jun 14 01:33:49 [conn3] build index test.tmp.mr.mr_1_inc { 0: 1 }
m30001| Thu Jun 14 01:33:49 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:49 [conn3] CMD: drop test.tmp.mr.mr_1
m30001| Thu Jun 14 01:33:49 [conn3] build index test.tmp.mr.mr_1 { _id: 1 }
m30001| Thu Jun 14 01:33:49 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:49 [conn3] CMD: drop test.tmp.mrs.mr_1339652029_2
m30001| Thu Jun 14 01:33:49 [conn3] CMD: drop test.tmp.mr.mr_1
m30001| Thu Jun 14 01:33:49 [conn3] CMD: drop test.tmp.mr.mr_1
m30001| Thu Jun 14 01:33:49 [conn3] CMD: drop test.tmp.mr.mr_1_inc
m30000| Thu Jun 14 01:33:49 [conn7] CMD: drop test.tmp.mr.mr_0_inc
m30000| Thu Jun 14 01:33:49 [conn7] build index test.tmp.mr.mr_0_inc { 0: 1 }
m30000| Thu Jun 14 01:33:49 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:49 [conn7] CMD: drop test.tmp.mr.mr_0
m30000| Thu Jun 14 01:33:49 [conn7] build index test.tmp.mr.mr_0 { _id: 1 }
m30000| Thu Jun 14 01:33:49 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:49 [conn7] CMD: drop test.tmp.mrs.mr_1339652029_2
m30000| Thu Jun 14 01:33:49 [conn7] CMD: drop test.tmp.mr.mr_0
m30000| Thu Jun 14 01:33:49 [conn7] CMD: drop test.tmp.mr.mr_0
m30000| Thu Jun 14 01:33:50 [conn7] CMD: drop test.tmp.mr.mr_0_inc
m30999| Thu Jun 14 01:33:50 [conn] MR with single shard output, NS= primary=shard0001:localhost:30001
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_2
m30001| Thu Jun 14 01:33:50 [conn3] build index test.tmp.mr.mr_2 { _id: 1 }
m30001| Thu Jun 14 01:33:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:50 [conn3] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 2|1||4fd977bab62151f2b25ed3c6 based on: (empty)
m30001| Thu Jun 14 01:33:50 [conn3] ChunkManager: time to load chunks for test.foo2: 0ms sequenceNumber: 3 version: 1|0||4fd977bcb62151f2b25ed3c7 based on: (empty)
m30001| Thu Jun 14 01:33:50 [conn3] ChunkManager: time to load chunks for test.mr: 0ms sequenceNumber: 4 version: 2|1||4fd977bcb62151f2b25ed3c8 based on: (empty)
m30001| Thu Jun 14 01:33:50 [initandlisten] connection accepted from 127.0.0.1:42753 #6 (6 connections now open)
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.smr1_out
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_2
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_2
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_2
m30000| Thu Jun 14 01:33:50 [initandlisten] connection accepted from 127.0.0.1:51345 #12 (12 connections now open)
m30000| Thu Jun 14 01:33:50 [conn6] CMD: drop test.tmp.mrs.mr_1339652029_2
m30001| Thu Jun 14 01:33:50 [conn4] CMD: drop test.tmp.mrs.mr_1339652029_2
{
"result" : "smr1_out",
"counts" : {
"input" : NumberLong(4),
"emit" : NumberLong(8),
"reduce" : NumberLong(4),
"output" : NumberLong(3)
},
"timeMillis" : 32,
"timing" : {
"shardProcessing" : 26,
"postProcessing" : 5
},
"shardCounts" : {
"localhost:30000" : {
"input" : 3,
"emit" : 6,
"reduce" : 2,
"output" : 3
},
"localhost:30001" : {
"input" : 1,
"emit" : 2,
"reduce" : 0,
"output" : 2
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(5),
"reduce" : NumberLong(2),
"output" : NumberLong(3)
}
},
"ok" : 1,
}
m30999| Thu Jun 14 01:33:50 [conn] DROP: test.smr1_out
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.smr1_out
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_3_inc
m30001| Thu Jun 14 01:33:50 [conn3] build index test.tmp.mr.mr_3_inc { 0: 1 }
m30001| Thu Jun 14 01:33:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_3
m30001| Thu Jun 14 01:33:50 [conn3] build index test.tmp.mr.mr_3 { _id: 1 }
m30001| Thu Jun 14 01:33:50 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mrs.mr_1339652030_3
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_3
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_3
m30001| Thu Jun 14 01:33:50 [conn3] CMD: drop test.tmp.mr.mr_3_inc
m30000| Thu Jun 14 01:33:50 [conn7] CMD: drop test.tmp.mr.mr_1_inc
m30000| Thu Jun 14 01:33:50 [conn7] build index test.tmp.mr.mr_1_inc { 0: 1 }
m30000| Thu Jun 14 01:33:50 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:50 [conn7] CMD: drop test.tmp.mr.mr_1
m30000| Thu Jun 14 01:33:50 [conn7] build index test.tmp.mr.mr_1 { _id: 1 }
m30000| Thu Jun 14 01:33:50 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:50 [conn7] CMD: drop test.tmp.mrs.mr_1339652030_3
m30000| Thu Jun 14 01:33:50 [conn7] CMD: drop test.tmp.mr.mr_1
m30000| Thu Jun 14 01:33:50 [conn7] CMD: drop test.tmp.mr.mr_1
m30000| Thu Jun 14 01:33:50 [conn7] CMD: drop test.tmp.mr.mr_1_inc
m30999| Thu Jun 14 01:33:50 [conn] MR with single shard output, NS=test. primary=shard0001:localhost:30001
m30000| Thu Jun 14 01:33:50 [conn6] CMD: drop test.tmp.mrs.mr_1339652030_3
m30001| Thu Jun 14 01:33:50 [conn4] CMD: drop test.tmp.mrs.mr_1339652030_3
{
"results" : [
{
"_id" : "a",
"value" : {
"count" : 2
}
},
{
"_id" : "b",
"value" : {
"count" : 3
}
},
{
"_id" : "c",
"value" : {
"count" : 3
}
}
],
"counts" : {
"input" : NumberLong(4),
"emit" : NumberLong(8),
"reduce" : NumberLong(4),
"output" : NumberLong(3)
},
"timeMillis" : 5,
"timing" : {
"shardProcessing" : 3,
"postProcessing" : 1
},
"shardCounts" : {
"localhost:30000" : {
"input" : 3,
"emit" : 6,
"reduce" : 2,
"output" : 3
},
"localhost:30001" : {
"input" : 1,
"emit" : 2,
"reduce" : 0,
"output" : 2
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(5),
"reduce" : NumberLong(2),
"output" : NumberLong(3)
}
},
"ok" : 1,
}
{ "a" : 2, "b" : 3, "c" : 3 }
m30999| Thu Jun 14 01:33:50 [conn] splitting: test.mr shard: ns:test.mr at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { x: 2.0 } max: { x: MaxKey }
m30000| Thu Jun 14 01:33:50 [initandlisten] connection accepted from 127.0.0.1:51347 #13 (13 connections now open)
m30000| Thu Jun 14 01:33:50 [conn6] received splitChunk request: { splitChunk: "test.mr", keyPattern: { x: 1.0 }, min: { x: 2.0 }, max: { x: MaxKey }, from: "shard0000", splitKeys: [ { x: 3.0 } ], shardId: "test.mr-x_2.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:50 [conn6] created new distributed lock for test.mr on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:50 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339652030:67419980 (sleeping for 30000ms)
m30000| Thu Jun 14 01:33:50 [conn6] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30000:1339652030:67419980' acquired, ts : 4fd977be715e26d181eb98c1
m30000| Thu Jun 14 01:33:50 [conn6] splitChunk accepted at version 2|0||4fd977bcb62151f2b25ed3c8
m30000| Thu Jun 14 01:33:50 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:50-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51334", time: new Date(1339652030020), what: "split", ns: "test.mr", details: { before: { min: { x: 2.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 2.0 }, max: { x: 3.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd977bcb62151f2b25ed3c8') }, right: { min: { x: 3.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd977bcb62151f2b25ed3c8') } } }
m30000| Thu Jun 14 01:33:50 [conn6] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30000:1339652030:67419980' unlocked.
m30999| Thu Jun 14 01:33:50 [conn] ChunkManager: time to load chunks for test.mr: 0ms sequenceNumber: 9 version: 2|3||4fd977bcb62151f2b25ed3c8 based on: 2|1||4fd977bcb62151f2b25ed3c8
m30999| Thu Jun 14 01:33:50 [conn] splitting: test.mr shard: ns:test.mr at: shard0000:localhost:30000 lastmod: 2|3||000000000000000000000000 min: { x: 3.0 } max: { x: MaxKey }
m30000| Thu Jun 14 01:33:50 [conn6] received splitChunk request: { splitChunk: "test.mr", keyPattern: { x: 1.0 }, min: { x: 3.0 }, max: { x: MaxKey }, from: "shard0000", splitKeys: [ { x: 4.0 } ], shardId: "test.mr-x_3.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:50 [conn6] created new distributed lock for test.mr on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:50 [conn6] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30000:1339652030:67419980' acquired, ts : 4fd977be715e26d181eb98c2
m30000| Thu Jun 14 01:33:50 [conn6] splitChunk accepted at version 2|3||4fd977bcb62151f2b25ed3c8
m30000| Thu Jun 14 01:33:50 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:50-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51334", time: new Date(1339652030024), what: "split", ns: "test.mr", details: { before: { min: { x: 3.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 3.0 }, max: { x: 4.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd977bcb62151f2b25ed3c8') }, right: { min: { x: 4.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd977bcb62151f2b25ed3c8') } } }
m30000| Thu Jun 14 01:33:50 [conn6] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30000:1339652030:67419980' unlocked.
m30999| Thu Jun 14 01:33:50 [conn] ChunkManager: time to load chunks for test.mr: 0ms sequenceNumber: 10 version: 2|5||4fd977bcb62151f2b25ed3c8 based on: 2|3||4fd977bcb62151f2b25ed3c8
m30999| Thu Jun 14 01:33:50 [conn] CMD: movechunk: { movechunk: "test.mr", find: { x: 3.0 }, to: "localhost:30001" }
m30999| Thu Jun 14 01:33:50 [conn] moving chunk ns: test.mr moving ( ns:test.mr at: shard0000:localhost:30000 lastmod: 2|4||000000000000000000000000 min: { x: 3.0 } max: { x: 4.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:33:50 [conn6] received moveChunk request: { moveChunk: "test.mr", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { x: 3.0 }, max: { x: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "test.mr-x_3.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:33:50 [conn6] created new distributed lock for test.mr on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:50 [conn6] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30000:1339652030:67419980' acquired, ts : 4fd977be715e26d181eb98c3
m30000| Thu Jun 14 01:33:50 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:50-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51334", time: new Date(1339652030028), what: "moveChunk.start", ns: "test.mr", details: { min: { x: 3.0 }, max: { x: 4.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:50 [conn6] moveChunk request accepted at version 2|5||4fd977bcb62151f2b25ed3c8
m30000| Thu Jun 14 01:33:50 [conn6] moveChunk number of documents: 1
m30001| Thu Jun 14 01:33:50 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mr' { x: 3.0 } -> { x: 4.0 }
m30000| Thu Jun 14 01:33:51 [conn6] moveChunk data transfer progress: { active: true, ns: "test.mr", from: "localhost:30000", min: { x: 3.0 }, max: { x: 4.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 62, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:51 [conn6] moveChunk setting version to: 3|0||4fd977bcb62151f2b25ed3c8
m30001| Thu Jun 14 01:33:51 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mr' { x: 3.0 } -> { x: 4.0 }
m30001| Thu Jun 14 01:33:51 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:51-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652031038), what: "moveChunk.to", ns: "test.mr", details: { min: { x: 3.0 }, max: { x: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30000| Thu Jun 14 01:33:51 [conn6] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mr", from: "localhost:30000", min: { x: 3.0 }, max: { x: 4.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 62, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:33:51 [conn6] moveChunk updating self version to: 3|1||4fd977bcb62151f2b25ed3c8 through { x: 2.0 } -> { x: 3.0 } for collection 'test.mr'
m30000| Thu Jun 14 01:33:51 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:51-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51334", time: new Date(1339652031043), what: "moveChunk.commit", ns: "test.mr", details: { min: { x: 3.0 }, max: { x: 4.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:33:51 [conn6] doing delete inline
m30000| Thu Jun 14 01:33:51 [conn6] moveChunk deleted: 1
m30000| Thu Jun 14 01:33:51 [conn6] distributed lock 'test.mr/domU-12-31-39-01-70-B4:30000:1339652030:67419980' unlocked.
m30000| Thu Jun 14 01:33:51 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:51-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51334", time: new Date(1339652031044), what: "moveChunk.from", ns: "test.mr", details: { min: { x: 3.0 }, max: { x: 4.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30000| Thu Jun 14 01:33:51 [conn6] command admin.$cmd command: { moveChunk: "test.mr", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { x: 3.0 }, max: { x: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "test.mr-x_3.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:33432 w:2142 reslen:37 1017ms
m30999| Thu Jun 14 01:33:51 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:33:51 [conn] ChunkManager: time to load chunks for test.mr: 0ms sequenceNumber: 11 version: 3|1||4fd977bcb62151f2b25ed3c8 based on: 2|5||4fd977bcb62151f2b25ed3c8
after extra split
m30999| Thu Jun 14 01:33:51 [conn] setShardVersion shard0000 localhost:30000 test.mr { setShardVersion: "test.mr", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ab19f0
m30999| Thu Jun 14 01:33:51 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), ok: 1.0 }
m30999| Thu Jun 14 01:33:51 [conn] setShardVersion shard0001 localhost:30001 test.mr { setShardVersion: "test.mr", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), serverID: ObjectId('4fd977b9b62151f2b25ed3c4'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ab39b8
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_2_inc
m30000| Thu Jun 14 01:33:51 [conn7] build index test.tmp.mr.mr_2_inc { 0: 1 }
m30000| Thu Jun 14 01:33:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_2
m30000| Thu Jun 14 01:33:51 [conn7] build index test.tmp.mr.mr_2 { _id: 1 }
m30000| Thu Jun 14 01:33:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mrs.mr_1339652031_4
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_2
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_2
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_2_inc
m30999| Thu Jun 14 01:33:51 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd977bcb62151f2b25ed3c8'), ok: 1.0 }
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_4_inc
m30001| Thu Jun 14 01:33:51 [conn3] build index test.tmp.mr.mr_4_inc { 0: 1 }
m30001| Thu Jun 14 01:33:51 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_4
m30001| Thu Jun 14 01:33:51 [conn3] build index test.tmp.mr.mr_4 { _id: 1 }
m30001| Thu Jun 14 01:33:51 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mrs.mr_1339652031_4
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_4
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_4
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_4_inc
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_5
m30001| Thu Jun 14 01:33:51 [conn3] build index test.tmp.mr.mr_5 { _id: 1 }
m30001| Thu Jun 14 01:33:51 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.smr1_out
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_5
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_5
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_5
m30999| Thu Jun 14 01:33:51 [conn] MR with single shard output, NS= primary=shard0001:localhost:30001
m30000| Thu Jun 14 01:33:51 [conn6] CMD: drop test.tmp.mrs.mr_1339652031_4
m30001| Thu Jun 14 01:33:51 [conn4] CMD: drop test.tmp.mrs.mr_1339652031_4
{
"result" : "smr1_out",
"counts" : {
"input" : NumberLong(4),
"emit" : NumberLong(8),
"reduce" : NumberLong(5),
"output" : NumberLong(3)
},
"timeMillis" : 11,
"timing" : {
"shardProcessing" : 8,
"postProcessing" : 3
},
"shardCounts" : {
"localhost:30000" : {
"input" : 2,
"emit" : 4,
"reduce" : 2,
"output" : 2
},
"localhost:30001" : {
"input" : 2,
"emit" : 4,
"reduce" : 1,
"output" : 3
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(5),
"reduce" : NumberLong(2),
"output" : NumberLong(3)
}
},
"ok" : 1,
}
m30999| Thu Jun 14 01:33:51 [conn] DROP: test.smr1_out
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.smr1_out
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_6_inc
m30001| Thu Jun 14 01:33:51 [conn3] build index test.tmp.mr.mr_6_inc { 0: 1 }
m30001| Thu Jun 14 01:33:51 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_6
m30001| Thu Jun 14 01:33:51 [conn3] build index test.tmp.mr.mr_6 { _id: 1 }
m30001| Thu Jun 14 01:33:51 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mrs.mr_1339652031_5
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_6
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_6
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_6_inc
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_3_inc
m30000| Thu Jun 14 01:33:51 [conn7] build index test.tmp.mr.mr_3_inc { 0: 1 }
m30000| Thu Jun 14 01:33:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_3
m30000| Thu Jun 14 01:33:51 [conn7] build index test.tmp.mr.mr_3 { _id: 1 }
m30000| Thu Jun 14 01:33:51 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mrs.mr_1339652031_5
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_3
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_3
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_3_inc
m30999| Thu Jun 14 01:33:51 [conn] MR with single shard output, NS=test. primary=shard0001:localhost:30001
m30000| Thu Jun 14 01:33:51 [conn6] CMD: drop test.tmp.mrs.mr_1339652031_5
m30001| Thu Jun 14 01:33:51 [conn4] CMD: drop test.tmp.mrs.mr_1339652031_5
{
"results" : [
{
"_id" : "a",
"value" : {
"count" : 2
}
},
{
"_id" : "b",
"value" : {
"count" : 3
}
},
{
"_id" : "c",
"value" : {
"count" : 3
}
}
],
"counts" : {
"input" : NumberLong(4),
"emit" : NumberLong(8),
"reduce" : NumberLong(5),
"output" : NumberLong(3)
},
"timeMillis" : 5,
"timing" : {
"shardProcessing" : 3,
"postProcessing" : 1
},
"shardCounts" : {
"localhost:30000" : {
"input" : 2,
"emit" : 4,
"reduce" : 2,
"output" : 2
},
"localhost:30001" : {
"input" : 2,
"emit" : 4,
"reduce" : 1,
"output" : 3
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(5),
"reduce" : NumberLong(2),
"output" : NumberLong(3)
}
},
"ok" : 1,
}
{ "a" : 2, "b" : 3, "c" : 3 }
m30001| Thu Jun 14 01:33:51 [conn3] JS Error: SyntaxError: syntax error nofile_a:0
m30001| Thu Jun 14 01:33:51 [conn3] mr failed, removing collection :: caused by :: 13598 couldn't compile code for: _map
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_7
m30001| Thu Jun 14 01:33:51 [conn3] CMD: drop test.tmp.mr.mr_7_inc
m30000| Thu Jun 14 01:33:51 [conn7] JS Error: SyntaxError: syntax error nofile_a:0
m30000| Thu Jun 14 01:33:51 [conn7] mr failed, removing collection :: caused by :: 13598 couldn't compile code for: _map
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_4
m30000| Thu Jun 14 01:33:51 [conn7] CMD: drop test.tmp.mr.mr_4_inc
m30000| Thu Jun 14 01:33:51 [conn6] CMD: drop test.tmp.mrs.mr_1339652031_6
m30001| Thu Jun 14 01:33:51 [conn4] CMD: drop test.tmp.mrs.mr_1339652031_6
m30000| Thu Jun 14 01:33:51 [conn1] JS Error: SyntaxError: syntax error nofile_a:0
m30000| Thu Jun 14 01:33:51 [conn1] mr failed, removing collection :: caused by :: 13598 couldn't compile code for: _map
m30000| Thu Jun 14 01:33:51 [conn1] CMD: drop test.tmp.mr.mr_5
m30000| Thu Jun 14 01:33:51 [conn1] CMD: drop test.tmp.mr.mr_5_inc
{
"ok" : 0,
"errmsg" : "MR parallel processing failed: { errmsg: \"exception: couldn't compile code for: _map\", code: 13598, ok: 0.0 }"
}
{
"errmsg" : "exception: couldn't compile code for: _map",
"code" : 13598,
"ok" : 0
}
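The two error documents above show the same failure at two levels: each shard rejects the map function with code 13598 ("couldn't compile code for: _map", matching the per-shard JS SyntaxError lines), and for the sharded collection mongos reports it as "MR parallel processing failed". A sketch of the kind of call that triggers it, using a deliberately invalid map string:

    // Sketch: a map argument that does not compile as JavaScript fails with
    // code 13598 on the shard; mongos wraps this for sharded input.
    var bad = db.runCommand({
        mapreduce: "mr",
        map: "tihs is not javascript",                     // deliberately broken
        reduce: "function(k, vals) { return vals[0]; }",
        out: { inline: 1 }
    });
    assert.eq(0, bad.ok);
    print(bad.errmsg); // "... couldn't compile code for: _map" (or the mongos wrapper)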
m30001| Thu Jun 14 01:33:51 [conn3] build index test.countaa { _id: 1 }
m30001| Thu Jun 14 01:33:51 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:51 [conn] User Assertion: 10038:forced error
m30999| Thu Jun 14 01:33:51 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:33:51 [conn4] end connection 127.0.0.1:51332 (12 connections now open)
m30000| Thu Jun 14 01:33:51 [conn3] end connection 127.0.0.1:51329 (12 connections now open)
m30000| Thu Jun 14 01:33:51 [conn6] end connection 127.0.0.1:51334 (10 connections now open)
m30000| Thu Jun 14 01:33:51 [conn7] end connection 127.0.0.1:51337 (9 connections now open)
m30001| Thu Jun 14 01:33:51 [conn3] end connection 127.0.0.1:42745 (5 connections now open)
m30001| Thu Jun 14 01:33:51 [conn4] end connection 127.0.0.1:42746 (4 connections now open)
Thu Jun 14 01:33:52 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:33:52 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:33:52 [interruptThread] now exiting
m30000| Thu Jun 14 01:33:52 dbexit:
m30000| Thu Jun 14 01:33:52 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:33:52 [interruptThread] closing listening socket: 22
m30000| Thu Jun 14 01:33:52 [interruptThread] closing listening socket: 23
m30000| Thu Jun 14 01:33:52 [interruptThread] closing listening socket: 24
m30000| Thu Jun 14 01:33:52 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:33:52 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:33:52 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:33:52 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:33:52 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:33:52 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:33:52 [conn5] end connection 127.0.0.1:42750 (3 connections now open)
m30000| Thu Jun 14 01:33:52 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:33:52 [conn13] end connection 127.0.0.1:51347 (8 connections now open)
m30000| Thu Jun 14 01:33:52 [conn11] end connection 127.0.0.1:51344 (7 connections now open)
m30000| Thu Jun 14 01:33:52 dbexit: really exiting now
Thu Jun 14 01:33:53 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:33:53 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:33:53 [interruptThread] now exiting
m30001| Thu Jun 14 01:33:53 dbexit:
m30001| Thu Jun 14 01:33:53 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:33:53 [interruptThread] closing listening socket: 25
m30001| Thu Jun 14 01:33:53 [interruptThread] closing listening socket: 26
m30001| Thu Jun 14 01:33:53 [interruptThread] closing listening socket: 27
m30001| Thu Jun 14 01:33:53 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:33:53 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:33:53 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:33:53 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:33:53 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:33:53 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:33:53 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:33:53 [conn6] end connection 127.0.0.1:42753 (2 connections now open)
m30001| Thu Jun 14 01:33:53 dbexit: really exiting now
Thu Jun 14 01:33:54 shell: stopped mongo program on port 30001
*** ShardingTest features2 completed successfully in 9.422 seconds ***
9468.735933ms
Thu Jun 14 01:33:54 [initandlisten] connection accepted from 127.0.0.1:54851 #27 (14 connections now open)
*******************************************
Test : features3.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/features3.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/features3.js";TestData.testFile = "features3.js";TestData.testName = "features3";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:33:54 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/features30'
Thu Jun 14 01:33:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/features30
m30000| Thu Jun 14 01:33:54
m30000| Thu Jun 14 01:33:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:33:54
m30000| Thu Jun 14 01:33:54 [initandlisten] MongoDB starting : pid=24585 port=30000 dbpath=/data/db/features30 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:33:54 [initandlisten]
m30000| Thu Jun 14 01:33:54 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:33:54 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:33:54 [initandlisten]
m30000| Thu Jun 14 01:33:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:33:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:33:54 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:33:54 [initandlisten]
m30000| Thu Jun 14 01:33:54 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:33:54 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:33:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:33:54 [initandlisten] options: { dbpath: "/data/db/features30", port: 30000 }
m30000| Thu Jun 14 01:33:54 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:33:54 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/features31'
m30000| Thu Jun 14 01:33:54 [initandlisten] connection accepted from 127.0.0.1:51350 #1 (1 connection now open)
Thu Jun 14 01:33:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/features31
m30001| Thu Jun 14 01:33:54
m30001| Thu Jun 14 01:33:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:33:54
m30001| Thu Jun 14 01:33:54 [initandlisten] MongoDB starting : pid=24598 port=30001 dbpath=/data/db/features31 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:33:54 [initandlisten]
m30001| Thu Jun 14 01:33:54 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:33:54 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:33:54 [initandlisten]
m30001| Thu Jun 14 01:33:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:33:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:33:54 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:33:54 [initandlisten]
m30001| Thu Jun 14 01:33:54 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:33:54 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:33:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:33:54 [initandlisten] options: { dbpath: "/data/db/features31", port: 30001 }
m30001| Thu Jun 14 01:33:54 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:33:54 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30000| Thu Jun 14 01:33:54 [initandlisten] connection accepted from 127.0.0.1:51353 #2 (2 connections now open)
ShardingTest features3 :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001
    ]
}
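The block above is the summary ShardingTest prints once both shards and the config server are up and the mongos has been pointed at them. In a jstest of this vintage the topology is usually declared in one constructor call; the option names below reflect the 2.x shell test harness and are an approximation, not a quote from features3.js:

    // Approximate harness setup for the topology shown above:
    // two mongod shards (30000/30001), one config server, one mongos (30999).
    var s = new ShardingTest({ name: "features3", shards: 2, mongos: 1 });
    db = s.getDB("test");       // all test traffic goes through the mongos
    // ... test body ...
    s.stop();                   // tears down mongos, shards, and config server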
m30000| Thu Jun 14 01:33:54 [FileAllocator] allocating new datafile /data/db/features30/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:33:54 [FileAllocator] creating directory /data/db/features30/_tmp
m30001| Thu Jun 14 01:33:54 [initandlisten] connection accepted from 127.0.0.1:42759 #1 (1 connection now open)
Thu Jun 14 01:33:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30999| Thu Jun 14 01:33:54 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:33:54 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24612 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:33:54 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:33:54 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:33:54 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:33:54 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:33:54 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:54 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:54 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:54 [initandlisten] connection accepted from 127.0.0.1:51355 #3 (3 connections now open)
m30000| Thu Jun 14 01:33:54 [FileAllocator] done allocating datafile /data/db/features30/config.ns, size: 16MB, took 0.258 secs
m30000| Thu Jun 14 01:33:54 [FileAllocator] allocating new datafile /data/db/features30/config.0, filling with zeroes...
m30000| Thu Jun 14 01:33:55 [FileAllocator] done allocating datafile /data/db/features30/config.0, size: 16MB, took 0.262 secs
m30000| Thu Jun 14 01:33:55 [FileAllocator] allocating new datafile /data/db/features30/config.1, filling with zeroes...
m30000| Thu Jun 14 01:33:55 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:55 [conn2] insert config.settings keyUpdates:0 locks(micros) w:538241 538ms
m30999| Thu Jun 14 01:33:55 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:55 [initandlisten] connection accepted from 127.0.0.1:51358 #4 (4 connections now open)
m30999| Thu Jun 14 01:33:55 [mongosMain] connected connection!
m30000| Thu Jun 14 01:33:55 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:55 [mongosMain] MaxChunkSize: 50
m30000| Thu Jun 14 01:33:55 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:55 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:33:55 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:55 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:55 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:55 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:55 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:33:55 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:55 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:55 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:33:55 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:33:55 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:33:55 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:33:55
m30999| Thu Jun 14 01:33:55 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:33:55 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:55 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:55 [initandlisten] connection accepted from 127.0.0.1:51359 #5 (5 connections now open)
m30999| Thu Jun 14 01:33:55 [Balancer] connected connection!
m30999| Thu Jun 14 01:33:55 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:33:55 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652035:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:33:55 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:55 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:33:55 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:33:55 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652035:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:33:55 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:33:55 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:33:55 [Balancer] inserting initial doc in config.locks for lock balancer
m30000| Thu Jun 14 01:33:55 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:55 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652035:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652035:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652035:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:33:55 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977c30fa1a8e12dacc628" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:33:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652035:1804289383' acquired, ts : 4fd977c30fa1a8e12dacc628
m30999| Thu Jun 14 01:33:55 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:33:55 [Balancer] no collections to balance
m30999| Thu Jun 14 01:33:55 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:33:55 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:33:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652035:1804289383' unlocked.
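The balancer round above is serialized through a distributed lock document kept on the config server (the 'balancer' lock it acquires and unlocks). The lock and the balancer settings can be inspected directly from the mongos; a read-only sketch:

    // Inspect the balancer's distributed lock and settings on the config server.
    // In this era, state 0 = unlocked, 1 = being acquired, 2 = held.
    var conf = db.getSiblingDB("config");
    printjson(conf.locks.findOne({ _id: "balancer" }));
    printjson(conf.settings.findOne({ _id: "balancer" }));   // on/off flag and activeWindow, if set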
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:33:55 [mongosMain] connection accepted from 127.0.0.1:43339 #1 (1 connection now open)
m30999| Thu Jun 14 01:33:55 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:33:55 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:55 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:33:55 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:33:55 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:55 [conn] connected connection!
m30999| Thu Jun 14 01:33:55 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:33:55 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:33:55 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:33:55 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:33:55 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:33:55 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:33:55 [conn] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:33:55 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd977c30fa1a8e12dacc629
m30999| Thu Jun 14 01:33:55 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd977c30fa1a8e12dacc629 based on: (empty)
m30000| Thu Jun 14 01:33:55 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:33:55 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:33:55 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:33:55 [initandlisten] connection accepted from 127.0.0.1:51362 #6 (6 connections now open)
m30999| Thu Jun 14 01:33:55 [conn] connected connection!
m30999| Thu Jun 14 01:33:55 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977c30fa1a8e12dacc627
m30999| Thu Jun 14 01:33:55 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:33:55 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:33:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), shard: "shard0000", shardHost: "localhost:30000" } 0x8bdd758
m30999| Thu Jun 14 01:33:55 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:55 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:55 [conn] connected connection!
m30999| Thu Jun 14 01:33:55 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977c30fa1a8e12dacc627
m30999| Thu Jun 14 01:33:55 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:33:55 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:33:55 [initandlisten] connection accepted from 127.0.0.1:42768 #2 (2 connections now open)
m30001| Thu Jun 14 01:33:55 [FileAllocator] allocating new datafile /data/db/features31/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:33:55 [FileAllocator] creating directory /data/db/features31/_tmp
m30001| Thu Jun 14 01:33:55 [initandlisten] connection accepted from 127.0.0.1:42770 #3 (3 connections now open)
m30000| Thu Jun 14 01:33:55 [FileAllocator] done allocating datafile /data/db/features30/config.1, size: 32MB, took 0.569 secs
m30001| Thu Jun 14 01:33:56 [FileAllocator] done allocating datafile /data/db/features31/test.ns, size: 16MB, took 0.38 secs
m30001| Thu Jun 14 01:33:56 [FileAllocator] allocating new datafile /data/db/features31/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:56 [FileAllocator] done allocating datafile /data/db/features31/test.0, size: 16MB, took 0.249 secs
m30001| Thu Jun 14 01:33:56 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:33:56 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:33:56 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:33:56 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:7 W:71 r:272 w:1117491 1117ms
m30001| Thu Jun 14 01:33:56 [FileAllocator] allocating new datafile /data/db/features31/test.1, filling with zeroes...
m30001| Thu Jun 14 01:33:56 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd977c30fa1a8e12dacc627'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:112 reslen:51 1114ms
m30001| Thu Jun 14 01:33:56 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:33:56 [initandlisten] connection accepted from 127.0.0.1:51364 #7 (7 connections now open)
m30999| Thu Jun 14 01:33:56 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), shard: "shard0001", shardHost: "localhost:30001" } 0x8bdc978
m30999| Thu Jun 14 01:33:56 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:33:56 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8bdc978
m30999| Thu Jun 14 01:33:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
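The sequence above ('enabling sharding on: test', 'CMD: shardcollection', then a setShardVersion handshake with each shard) corresponds to two admin commands sent to the mongos. A sketch using the same namespace and shard key as the log; note that the 'setShardVersion failed! ... need_authoritative: true' line is part of the normal protocol, since the first version push for a brand-new collection is retried with authoritative: true:

    // Shard test.foo on _id; both commands go to the mongos.
    printjson(db.adminCommand({ enableSharding: "test" }));
    printjson(db.adminCommand({ shardcollection: "test.foo", key: { _id: 1 } }));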
m30999| Thu Jun 14 01:33:56 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30999| Thu Jun 14 01:33:56 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:33:56 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:33:56 [initandlisten] connection accepted from 127.0.0.1:42772 #4 (4 connections now open)
m30999| Thu Jun 14 01:33:56 [conn] connected connection!
m30000| Thu Jun 14 01:33:56 [initandlisten] connection accepted from 127.0.0.1:51366 #8 (8 connections now open)
m30001| Thu Jun 14 01:33:56 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 5000.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:56 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652036:1687417857' acquired, ts : 4fd977c4e2aeee914123315f
m30001| Thu Jun 14 01:33:56 [conn4] splitChunk accepted at version 1|0||4fd977c30fa1a8e12dacc629
m30001| Thu Jun 14 01:33:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:56-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42772", time: new Date(1339652036387), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 5000.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977c30fa1a8e12dacc629') }, right: { min: { _id: 5000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977c30fa1a8e12dacc629') } } }
m30001| Thu Jun 14 01:33:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652036:1687417857' unlocked.
m30001| Thu Jun 14 01:33:56 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652036:1687417857 (sleeping for 30000ms)
m30999| Thu Jun 14 01:33:56 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd977c30fa1a8e12dacc629 based on: 1|0||4fd977c30fa1a8e12dacc629
m30999| Thu Jun 14 01:33:56 [conn] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 3.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:33:56 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 5000.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:33:56 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 5000.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:33:56 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:33:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652036:1687417857' acquired, ts : 4fd977c4e2aeee9141233160
m30001| Thu Jun 14 01:33:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:56-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42772", time: new Date(1339652036392), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 5000.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:56 [conn4] moveChunk request accepted at version 1|2||4fd977c30fa1a8e12dacc629
m30001| Thu Jun 14 01:33:56 [conn4] moveChunk number of documents: 0
m30001| Thu Jun 14 01:33:56 [initandlisten] connection accepted from 127.0.0.1:42774 #5 (5 connections now open)
m30000| Thu Jun 14 01:33:56 [FileAllocator] allocating new datafile /data/db/features30/test.ns, filling with zeroes...
m30000| Thu Jun 14 01:33:56 [FileAllocator] done allocating datafile /data/db/features30/test.ns, size: 16MB, took 0.537 secs
m30000| Thu Jun 14 01:33:56 [FileAllocator] allocating new datafile /data/db/features30/test.0, filling with zeroes...
m30001| Thu Jun 14 01:33:57 [FileAllocator] done allocating datafile /data/db/features31/test.1, size: 32MB, took 0.862 secs
m30001| Thu Jun 14 01:33:57 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 5000.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:33:57 [FileAllocator] done allocating datafile /data/db/features30/test.0, size: 16MB, took 0.552 secs
m30000| Thu Jun 14 01:33:57 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:33:57 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:33:57 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:33:57 [FileAllocator] allocating new datafile /data/db/features30/test.1, filling with zeroes...
m30000| Thu Jun 14 01:33:57 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 5000.0 }
m30000| Thu Jun 14 01:33:58 [FileAllocator] done allocating datafile /data/db/features30/test.1, size: 32MB, took 0.586 secs
m30001| Thu Jun 14 01:33:58 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 5000.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:33:58 [conn4] moveChunk setting version to: 2|0||4fd977c30fa1a8e12dacc629
m30000| Thu Jun 14 01:33:58 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 5000.0 }
m30000| Thu Jun 14 01:33:58 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:58-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652038407), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 5000.0 }, step1 of 5: 1100, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 891 } }
m30000| Thu Jun 14 01:33:58 [initandlisten] connection accepted from 127.0.0.1:51368 #9 (9 connections now open)
m30001| Thu Jun 14 01:33:58 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 5000.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:33:58 [conn4] moveChunk updating self version to: 2|1||4fd977c30fa1a8e12dacc629 through { _id: 5000.0 } -> { _id: MaxKey } for collection 'test.foo'
m30001| Thu Jun 14 01:33:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:58-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42772", time: new Date(1339652038411), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 5000.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:33:58 [conn4] doing delete inline
m30001| Thu Jun 14 01:33:58 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:33:58 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652036:1687417857' unlocked.
m30001| Thu Jun 14 01:33:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:33:58-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42772", time: new Date(1339652038412), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 5000.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2005, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:33:58 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 5000.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:69 w:43 reslen:37 2020ms
m30999| Thu Jun 14 01:33:58 [conn] moveChunk result: { ok: 1.0 }
m30000| Thu Jun 14 01:33:58 [conn6] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:33:58 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 2|1||4fd977c30fa1a8e12dacc629 based on: 1|2||4fd977c30fa1a8e12dacc629
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), shard: "shard0000", shardHost: "localhost:30000" } 0x8bdd758
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8bdd758
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:58 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: MinKey } max: { _id: 5000.0 } dataWritten: 8083660 splitThreshold: 471859
m30999| Thu Jun 14 01:33:58 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), shard: "shard0001", shardHost: "localhost:30001" } 0x8bdc978
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), ok: 1.0 }
m30999| Thu Jun 14 01:33:58 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { _id: 5000.0 } max: { _id: MaxKey } dataWritten: 8312765 splitThreshold: 471859
m30999| Thu Jun 14 01:33:58 [conn] chunk not full enough to trigger auto-split no split entry
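The splitChunk and moveChunk requests that m30001 received above are what mongos generates for two explicit admin commands from the test. Their shell form, matching the split point and target shard shown in the log:

    // Split test.foo at _id 5000, then move the lower chunk to shard0000.
    printjson(db.adminCommand({ split: "test.foo", middle: { _id: 5000 } }));
    printjson(db.adminCommand({ moveChunk: "test.foo", find: { _id: 3 }, to: "shard0000" }));
    // Afterwards sh.status() (or db.getSiblingDB("config").chunks.find()) shows
    // [MinKey, 5000) on shard0000 and [5000, MaxKey) on shard0001.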
m30999| Thu Jun 14 01:33:58 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:33:58 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:33:58 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:33:58 [conn] warning: mongos collstats doesn't know about: userFlags
m30001| Thu Jun 14 01:33:58 [conn3] build index test.bar { _id: 1 }
m30001| Thu Jun 14 01:33:58 [conn3] build index done. scanned 0 total records. 0 secs
about to fork shell: Thu Jun 14 2012 01:33:58 GMT-0400 (EDT)
Thu Jun 14 01:33:58 shell: started program /mnt/slaves/Linux_32bit/mongo/mongo --eval TestData = {
"testPath" : "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/features3.js",
"testFile" : "features3.js",
"testName" : "features3",
"noJournal" : false,
"noJournalPrealloc" : false,
"auth" : false,
"keyFile" : null,
"keyFileData" : null
};jsTest.authenticate(db.getMongo());try { while(true){ db.foo.find( function(){ x = ''; for ( i=0; i<10000; i++ ){ x+=i; } sleep( 1000 ); return true; } ).itcount() }} catch(e){ print('PShell execution ended:'); printjson( e ) } localhost:30999
after forking shell: Thu Jun 14 2012 01:33:58 GMT-0400 (EDT)
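The forked shell above loops a deliberately slow $where query so the main test has long-running operations to find and kill. In jstest terms this is the startParallelShell() helper from the shell test library; a hedged sketch of the pattern, with the query body mirroring the one in the log:

    // Run a slow $where query in a loop from a second shell until it is killed.
    var join = startParallelShell(
        "try {" +
        "  while (true) {" +
        "    db.foo.find(function() {" +
        "      var x = '';" +
        "      for (var i = 0; i < 10000; i++) { x += i; }" +
        "      sleep(1000);" +      // slow down each document evaluation
        "      return true;" +
        "    }).itcount();" +
        "  }" +
        "} catch (e) { print('PShell execution ended:'); printjson(e); }",
        30999);                     // port of the mongos started by this test
    // ... later, after killing the ops ...
    join();                         // wait for the parallel shell to exit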
sh24652| MongoDB shell version: 2.1.2-pre-
sh24652| connecting to: localhost:30999/test
m30999| Thu Jun 14 01:33:58 [mongosMain] connection accepted from 127.0.0.1:43348 #2 (2 connections now open)
m30999| Thu Jun 14 01:33:58 [conn] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:33:58 [initandlisten] connection accepted from 127.0.0.1:51370 #10 (10 connections now open)
m30999| Thu Jun 14 01:33:58 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:58 [conn] connected connection!
m30999| Thu Jun 14 01:33:58 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), shard: "shard0000", shardHost: "localhost:30000" } 0x8be1300
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:33:58 [conn] creating new connection to:localhost:30001
m30001| Thu Jun 14 01:33:58 [initandlisten] connection accepted from 127.0.0.1:42778 #6 (6 connections now open)
m30999| Thu Jun 14 01:33:58 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:33:58 [conn] connected connection!
m30999| Thu Jun 14 01:33:58 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977c30fa1a8e12dacc629'), serverID: ObjectId('4fd977c30fa1a8e12dacc627'), shard: "shard0001", shardHost: "localhost:30001" } 0x8be2690
m30999| Thu Jun 14 01:33:58 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
{ "op" : "shard0000:5162", "shard" : "shard0000", "shardid" : 5162 }
m30999| Thu Jun 14 01:33:59 [conn] want to kill op: op: "shard0000:5162"
m30999| Thu Jun 14 01:33:59 [conn] want to kill op: op: "shard0001:5102"
m30000| Thu Jun 14 01:33:59 [conn5] going to kill op: op: 5162
m30001| Thu Jun 14 01:33:59 [conn4] going to kill op: op: 5102
{ "op" : "shard0001:5102", "shard" : "shard0001", "shardid" : 5102 }
m30000| Thu Jun 14 01:33:59 [conn10] assertion 11601 operation was interrupted ns:test.foo query:{ $where: function () {
m30000| x = "";
m30000| for (i = 0; i < 10000; i++) {
m30000| x ... }
m30000| Thu Jun 14 01:33:59 [conn10] { $err: "operation was interrupted", code: 11601 }
m30000| Thu Jun 14 01:33:59 [conn10] query test.foo query: { $where: function () {
m30000| x = "";
m30000| for (i = 0; i < 10000; i++) {
m30000| x ... } ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:1032274 reslen:71 1032ms
m30999| Thu Jun 14 01:33:59 [conn] warning: db exception when finishing on shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.foo @ 2|1||4fd977c30fa1a8e12dacc629", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 11601 operation was interrupted
m30001| Thu Jun 14 01:33:59 [conn6] assertion 11601 operation was interrupted ns:test.foo query:{ $where: function () {
m30001| x = "";
m30001| for (i = 0; i < 10000; i++) {
m30001| x ... }
m30001| Thu Jun 14 01:33:59 [conn6] { $err: "operation was interrupted", code: 11601 }
m30001| Thu Jun 14 01:33:59 [conn6] query test.foo query: { $where: function () {
m30001| x = "";
m30001| for (i = 0; i < 10000; i++) {
m30001| x ... } ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:1031125 reslen:71 1031ms
m30999| Thu Jun 14 01:33:59 [conn] AssertionException while processing op type : 2004 to : test.foo :: caused by :: 11601 operation was interrupted
m30000| Thu Jun 14 01:33:59 [conn10] end connection 127.0.0.1:51370 (9 connections now open)
m30999| Thu Jun 14 01:33:59 [conn] end connection 127.0.0.1:43348 (1 connection now open)
sh24652| PShell execution ended:
sh24652| "error: { \"$err\" : \"operation was interrupted\", \"code\" : 11601 }"
after loop: Thu Jun 14 2012 01:34:00 GMT-0400 (EDT)
killTime: 1019
elapsed: 1289
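Through a mongos, currentOp() reports opids as "<shard>:<opid>" strings (e.g. "shard0000:5162" above) and killOp() accepts the same form, which mongos forwards to the owning shard; that is the 'want to kill op' / 'going to kill op' pairing in the log. A sketch of the kill-and-time loop that 'killTime' and 'elapsed' are measuring; identifying the slow ops by their $where clause is an assumption for illustration:

    // Find the slow $where queries on test.foo and kill them through the mongos.
    var start = new Date();
    db.currentOp().inprog.forEach(function(op) {
        if (op.active && op.ns === "test.foo" && op.query && op.query.$where) {
            var parts = op.opid.split(":");   // opid looks like "shard0000:5162" via mongos
            printjson({ op: op.opid, shard: parts[0], shardid: parseInt(parts[1], 10) });
            db.killOp(op.opid);
        }
    });
    print("killTime: " + (new Date() - start));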
m30000| Thu Jun 14 01:34:00 [conn5] CMD fsync: sync:1 lock:0
m30001| Thu Jun 14 01:34:00 [conn4] CMD fsync: sync:1 lock:0
---------------
---------------
{
    "opid" : "shard0000:83",
    "active" : true,
    "secs_running" : 4,
    "op" : "query",
    "ns" : "",
    "query" : {
        "writebacklisten" : ObjectId("4fd977c30fa1a8e12dacc627")
    },
    "client_s" : "127.0.0.1:51358",
    "desc" : "conn4",
    "threadId" : "0xb080fb90",
    "connectionId" : 4,
    "waitingForLock" : false,
    "numYields" : 0,
    "lockStatMillis" : {
        "timeLocked" : {
            "R" : NumberLong(6),
            "W" : NumberLong(0),
            "r" : NumberLong(1327),
            "w" : NumberLong(1595)
        },
        "timeAcquiring" : {
            "R" : NumberLong(2),
            "W" : NumberLong(0),
            "r" : NumberLong(46),
            "w" : NumberLong(16)
        }
    }
}
{
    "opid" : "shard0001:11",
    "active" : true,
    "secs_running" : 3,
    "op" : "query",
    "ns" : "",
    "query" : {
        "writebacklisten" : ObjectId("4fd977c30fa1a8e12dacc627")
    },
    "client_s" : "127.0.0.1:42768",
    "desc" : "conn2",
    "threadId" : "0xb28f8b90",
    "connectionId" : 2,
    "waitingForLock" : false,
    "numYields" : 0,
    "lockStatMillis" : {
        "timeLocked" : {
            "R" : NumberLong(7),
            "W" : NumberLong(71),
            "r" : NumberLong(272),
            "w" : NumberLong(1117491)
        },
        "timeAcquiring" : {
            "R" : NumberLong(1),
            "W" : NumberLong(1),
            "r" : NumberLong(19),
            "w" : NumberLong(3)
        }
    }
}
m30999| Thu Jun 14 01:34:00 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:34:00 [conn3] end connection 127.0.0.1:51355 (8 connections now open)
m30000| Thu Jun 14 01:34:00 [conn6] end connection 127.0.0.1:51362 (7 connections now open)
m30000| Thu Jun 14 01:34:00 [conn5] end connection 127.0.0.1:51359 (7 connections now open)
m30001| Thu Jun 14 01:34:00 [conn3] end connection 127.0.0.1:42770 (5 connections now open)
m30001| Thu Jun 14 01:34:00 [conn4] end connection 127.0.0.1:42772 (4 connections now open)
m30001| Thu Jun 14 01:34:00 [conn6] end connection 127.0.0.1:42778 (3 connections now open)
Thu Jun 14 01:34:01 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:34:01 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:34:01 [interruptThread] now exiting
m30000| Thu Jun 14 01:34:01 dbexit:
m30000| Thu Jun 14 01:34:01 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:34:01 [interruptThread] closing listening socket: 23
m30000| Thu Jun 14 01:34:01 [interruptThread] closing listening socket: 24
m30000| Thu Jun 14 01:34:01 [interruptThread] closing listening socket: 25
m30000| Thu Jun 14 01:34:01 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:34:01 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:34:01 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:34:01 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:34:01 [conn5] end connection 127.0.0.1:42774 (2 connections now open)
m30000| Thu Jun 14 01:34:01 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:34:01 [conn9] end connection 127.0.0.1:51368 (5 connections now open)
m30000| Thu Jun 14 01:34:01 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:34:01 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:34:01 dbexit: really exiting now
Thu Jun 14 01:34:02 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:34:02 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:34:02 [interruptThread] now exiting
m30001| Thu Jun 14 01:34:02 dbexit:
m30001| Thu Jun 14 01:34:02 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:34:02 [interruptThread] closing listening socket: 26
m30001| Thu Jun 14 01:34:02 [interruptThread] closing listening socket: 27
m30001| Thu Jun 14 01:34:02 [interruptThread] closing listening socket: 28
m30001| Thu Jun 14 01:34:02 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:34:02 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:34:02 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:34:02 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:34:02 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:34:02 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:34:02 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:34:02 dbexit: really exiting now
Thu Jun 14 01:34:03 shell: stopped mongo program on port 30001
*** ShardingTest features3 completed successfully in 9.049 seconds ***
9105.698824ms
Thu Jun 14 01:34:03 [initandlisten] connection accepted from 127.0.0.1:54875 #28 (15 connections now open)
*******************************************
Test : findandmodify1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/findandmodify1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/findandmodify1.js";TestData.testFile = "findandmodify1.js";TestData.testName = "findandmodify1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:34:03 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/find_and_modify_sharded0'
Thu Jun 14 01:34:03 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/find_and_modify_sharded0
m30000| Thu Jun 14 01:34:03
m30000| Thu Jun 14 01:34:03 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:34:03
m30000| Thu Jun 14 01:34:03 [initandlisten] MongoDB starting : pid=24664 port=30000 dbpath=/data/db/find_and_modify_sharded0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:34:03 [initandlisten]
m30000| Thu Jun 14 01:34:03 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:34:03 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:34:03 [initandlisten]
m30000| Thu Jun 14 01:34:03 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:34:03 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:34:03 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:34:03 [initandlisten]
m30000| Thu Jun 14 01:34:03 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:34:03 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:34:03 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:34:03 [initandlisten] options: { dbpath: "/data/db/find_and_modify_sharded0", port: 30000 }
m30000| Thu Jun 14 01:34:03 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:34:03 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/find_and_modify_sharded1'
m30000| Thu Jun 14 01:34:03 [initandlisten] connection accepted from 127.0.0.1:51374 #1 (1 connection now open)
Thu Jun 14 01:34:03 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/find_and_modify_sharded1
m30001| Thu Jun 14 01:34:03
m30001| Thu Jun 14 01:34:03 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:34:03
m30001| Thu Jun 14 01:34:03 [initandlisten] MongoDB starting : pid=24677 port=30001 dbpath=/data/db/find_and_modify_sharded1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:34:03 [initandlisten]
m30001| Thu Jun 14 01:34:03 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:34:03 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:34:03 [initandlisten]
m30001| Thu Jun 14 01:34:03 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:34:03 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:34:03 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:34:03 [initandlisten]
m30001| Thu Jun 14 01:34:03 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:34:03 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:34:03 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:34:03 [initandlisten] options: { dbpath: "/data/db/find_and_modify_sharded1", port: 30001 }
m30001| Thu Jun 14 01:34:03 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:34:03 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:34:03 [initandlisten] connection accepted from 127.0.0.1:42783 #1 (1 connection now open)
m30000| Thu Jun 14 01:34:03 [initandlisten] connection accepted from 127.0.0.1:51377 #2 (2 connections now open)
ShardingTest find_and_modify_sharded :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001
    ]
}
m30000| Thu Jun 14 01:34:03 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded0/config.ns, filling with zeroes...
Thu Jun 14 01:34:03 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -vv
m30000| Thu Jun 14 01:34:03 [FileAllocator] creating directory /data/db/find_and_modify_sharded0/_tmp
m30999| Thu Jun 14 01:34:03 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:34:03 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24692 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:34:03 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:34:03 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:34:03 [mongosMain] options: { configdb: "localhost:30000", port: 30999, vv: true }
m30999| Thu Jun 14 01:34:03 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:34:03 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:03 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:03 [mongosMain] connected connection!
m30000| Thu Jun 14 01:34:03 [initandlisten] connection accepted from 127.0.0.1:51379 #3 (3 connections now open)
m30000| Thu Jun 14 01:34:03 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded0/config.ns, size: 16MB, took 0.223 secs
m30000| Thu Jun 14 01:34:03 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:34:04 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded0/config.0, size: 16MB, took 0.255 secs
m30000| Thu Jun 14 01:34:04 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:34:04 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:04 [conn2] insert config.settings keyUpdates:0 locks(micros) w:496085 495ms
m30000| Thu Jun 14 01:34:04 [initandlisten] connection accepted from 127.0.0.1:51382 #4 (4 connections now open)
m30000| Thu Jun 14 01:34:04 [initandlisten] connection accepted from 127.0.0.1:51383 #5 (5 connections now open)
m30000| Thu Jun 14 01:34:04 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:34:04 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:04 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:34:04 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:04 [mongosMain] connected connection!
m30999| Thu Jun 14 01:34:04 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:34:04 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:34:04 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:34:04 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:34:04 [websvr] admin web console waiting for connections on port 31999
m30000| Thu Jun 14 01:34:04 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:04 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:34:04 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:34:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:04 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:34:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:04 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:34:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:04 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:04 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:34:04 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:34:04 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:34:04 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:34:04 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:34:04 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:34:04
m30999| Thu Jun 14 01:34:04 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:34:04 [initandlisten] connection accepted from 127.0.0.1:51384 #6 (6 connections now open)
m30000| Thu Jun 14 01:34:04 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:04 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:04 [Balancer] connected connection!
m30999| Thu Jun 14 01:34:04 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:34:04 [Balancer] skew from remote server localhost:30000 found: -1
m30999| Thu Jun 14 01:34:04 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:34:04 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:34:04 [Balancer] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds.
m30000| Thu Jun 14 01:34:04 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:04 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:04 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652044:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:34:04 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:34:04 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652044:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652044:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652044:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:34:04 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977cce703b5476a9e2a09" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30000| Thu Jun 14 01:34:04 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:34:04 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:34:04 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:34:04 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652044:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:34:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652044:1804289383' acquired, ts : 4fd977cce703b5476a9e2a09
m30999| Thu Jun 14 01:34:04 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:34:04 [Balancer] no collections to balance
m30999| Thu Jun 14 01:34:04 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:34:04 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:34:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652044:1804289383' unlocked.
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:34:04 [mongosMain] connection accepted from 127.0.0.1:43364 #1 (1 connection now open)
m30999| Thu Jun 14 01:34:04 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:34:04 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:04 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:34:04 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:34:04 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:04 [conn] connected connection!
m30001| Thu Jun 14 01:34:04 [initandlisten] connection accepted from 127.0.0.1:42793 #2 (2 connections now open)
m30999| Thu Jun 14 01:34:04 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:34:04 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:34:04 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:34:04 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:04 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: -1, options: 0, query: { _id: "test" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:04 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:34:04 [initandlisten] connection accepted from 127.0.0.1:51387 #7 (7 connections now open)
m30999| Thu Jun 14 01:34:04 [conn] connected connection!
m30999| Thu Jun 14 01:34:04 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977cce703b5476a9e2a08
m30999| Thu Jun 14 01:34:04 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:34:04 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd977cce703b5476a9e2a08'), authoritative: true }
m30999| Thu Jun 14 01:34:04 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:04 [conn] connected connection!
m30999| Thu Jun 14 01:34:04 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977cce703b5476a9e2a08
m30999| Thu Jun 14 01:34:04 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:34:04 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd977cce703b5476a9e2a08'), authoritative: true }
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30001| Thu Jun 14 01:34:04 [initandlisten] connection accepted from 127.0.0.1:42795 #3 (3 connections now open)
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test", partitioned: true, primary: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: -1, options: 0, query: { _id: "shard0001" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:04 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:04 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:34:04 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:34:04 [initandlisten] connection accepted from 127.0.0.1:42796 #4 (4 connections now open)
m30999| Thu Jun 14 01:34:04 [conn] connected connection!
m30999| Thu Jun 14 01:34:04 [conn] CMD: shardcollection: { shardcollection: "test.stuff", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:04 [conn] enable sharding on: test.stuff with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:34:04 [conn] going to create 1 chunk(s) for: test.stuff using new epoch 4fd977cce703b5476a9e2a0a
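The 'CMD: shardcollection' lines above shard test.stuff on { _id: 1 } and create its initial chunk. A sketch of the equivalent shell call against the mongos, mirroring the command document shown in the log:

    // assumes `db` is a mongo shell connection to the mongos on port 30999
    db.adminCommand({ shardcollection: "test.stuff", key: { _id: 1 } });
    // mongos then creates one chunk (MinKey -> MaxKey) with epoch 4fd977cce703b5476a9e2a0a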
m30001| Thu Jun 14 01:34:04 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:34:04 [FileAllocator] creating directory /data/db/find_and_modify_sharded1/_tmp
m30999| Thu Jun 14 01:34:04 [conn] loaded 1 chunks into new chunk manager for test.stuff with version 1|0||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:04 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 2 version: 1|0||4fd977cce703b5476a9e2a0a based on: (empty)
m30000| Thu Jun 14 01:34:04 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:34:04 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:04 [conn] resetting shard version of test.stuff on localhost:30000, version is zero
m30999| Thu Jun 14 01:34:04 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x9179888
m30999| Thu Jun 14 01:34:04 [conn] setShardVersion shard0000 localhost:30000 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), shard: "shard0000", shardHost: "localhost:30000" } 0x9175810
m30999| Thu Jun 14 01:34:04 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:04 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff my last seq: 0 current: 2 version: 1|0||4fd977cce703b5476a9e2a0a manager: 0x9179888
m30999| Thu Jun 14 01:34:04 [conn] setShardVersion shard0001 localhost:30001 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), shard: "shard0001", shardHost: "localhost:30001" } 0x9175ca8
m30000| Thu Jun 14 01:34:04 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded0/config.1, size: 32MB, took 0.627 secs
m30001| Thu Jun 14 01:34:05 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded1/test.ns, size: 16MB, took 0.375 secs
m30001| Thu Jun 14 01:34:05 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:34:05 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded1/test.0, size: 16MB, took 0.274 secs
m30001| Thu Jun 14 01:34:05 [conn4] build index test.stuff { _id: 1 }
m30001| Thu Jun 14 01:34:05 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:34:05 [conn4] info: creating collection test.stuff on add index
m30001| Thu Jun 14 01:34:05 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) W:88 r:243 w:1150446 1150ms
m30001| Thu Jun 14 01:34:05 [conn3] command admin.$cmd command: { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:62 reslen:177 1148ms
m30001| Thu Jun 14 01:34:05 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:34:05 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded1/test.1, filling with zeroes...
m30999| Thu Jun 14 01:34:05 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff", need_authoritative: true, errmsg: "first time for collection 'test.stuff'", ok: 0.0 }
m30999| Thu Jun 14 01:34:05 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff my last seq: 0 current: 2 version: 1|0||4fd977cce703b5476a9e2a0a manager: 0x9179888
m30999| Thu Jun 14 01:34:05 [conn] setShardVersion shard0001 localhost:30001 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9175ca8
m30000| Thu Jun 14 01:34:05 [initandlisten] connection accepted from 127.0.0.1:51390 #8 (8 connections now open)
m30999| Thu Jun 14 01:34:05 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
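The setShardVersion failure above ("first time for collection 'test.stuff'") followed by an immediate retry with authoritative: true is mongos's normal first-contact handshake with a shard, not a test error. The command it retries, copied from the log line above for reference, has this shape (internal; applications do not send it themselves):

    // shape of the internal command mongos resends to shard0001, taken from the log
    var setShardVersionCmd = {
        setShardVersion: "test.stuff",
        configdb: "localhost:30000",
        version: Timestamp(1000, 0),
        versionEpoch: ObjectId("4fd977cce703b5476a9e2a0a"),
        serverID: ObjectId("4fd977cce703b5476a9e2a08"),
        authoritative: true,
        shard: "shard0001",
        shardHost: "localhost:30001"
    };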
m30999| Thu Jun 14 01:34:05 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:34:05 [initandlisten] connection accepted from 127.0.0.1:51391 #9 (9 connections now open)
m30000| Thu Jun 14 01:34:05 [initandlisten] connection accepted from 127.0.0.1:51392 #10 (10 connections now open)
m30001| Thu Jun 14 01:34:05 [conn4] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 10.0 } ], shardId: "test.stuff-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:05 [conn4] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:05 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' acquired, ts : 4fd977cd5f0eb85d6d9da13d
m30001| Thu Jun 14 01:34:05 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652045:126706824 (sleeping for 30000ms)
m30001| Thu Jun 14 01:34:05 [conn4] splitChunk accepted at version 1|0||4fd977cce703b5476a9e2a0a
m30001| Thu Jun 14 01:34:05 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:05-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652045584), what: "split", ns: "test.stuff", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 10.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30001| Thu Jun 14 01:34:05 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' unlocked.
m30999| Thu Jun 14 01:34:05 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 1|0||4fd977cce703b5476a9e2a0a and 1 chunks
m30999| Thu Jun 14 01:34:05 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 1|2||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:05 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 3 version: 1|2||4fd977cce703b5476a9e2a0a based on: 1|0||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:05 [conn] CMD: movechunk: { movechunk: "test.stuff", find: { _id: 10.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:34:05 [conn] moving chunk ns: test.stuff moving ( ns:test.stuff at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 10.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:05 [conn4] received moveChunk request: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.stuff-_id_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:05 [conn4] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:05 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' acquired, ts : 4fd977cd5f0eb85d6d9da13e
m30001| Thu Jun 14 01:34:05 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:05-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652045589), what: "moveChunk.start", ns: "test.stuff", details: { min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:05 [conn4] moveChunk request accepted at version 1|2||4fd977cce703b5476a9e2a0a
m30001| Thu Jun 14 01:34:05 [conn4] moveChunk number of documents: 0
m30000| Thu Jun 14 01:34:05 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded0/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:34:05 [initandlisten] connection accepted from 127.0.0.1:42800 #5 (5 connections now open)
m30000| Thu Jun 14 01:34:06 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded0/test.ns, size: 16MB, took 0.792 secs
m30000| Thu Jun 14 01:34:06 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded0/test.0, filling with zeroes...
m30001| Thu Jun 14 01:34:06 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded1/test.1, size: 32MB, took 0.909 secs
m30001| Thu Jun 14 01:34:06 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: 10.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:34:06 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded0/test.0, size: 16MB, took 0.353 secs
m30000| Thu Jun 14 01:34:06 [migrateThread] build index test.stuff { _id: 1 }
m30000| Thu Jun 14 01:34:06 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:06 [migrateThread] info: creating collection test.stuff on add index
m30000| Thu Jun 14 01:34:06 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:34:06 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: 10.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:34:07 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded0/test.1, size: 32MB, took 0.548 secs
m30001| Thu Jun 14 01:34:07 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: 10.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:07 [conn4] moveChunk setting version to: 2|0||4fd977cce703b5476a9e2a0a
m30000| Thu Jun 14 01:34:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: 10.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:34:07 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652047607), what: "moveChunk.to", ns: "test.stuff", details: { min: { _id: 10.0 }, max: { _id: MaxKey }, step1 of 5: 1162, step2 of 5: 0, step3 of 5: 27, step4 of 5: 0, step5 of 5: 828 } }
m30000| Thu Jun 14 01:34:07 [initandlisten] connection accepted from 127.0.0.1:51394 #11 (11 connections now open)
m30001| Thu Jun 14 01:34:07 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff", from: "localhost:30001", min: { _id: 10.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:07 [conn4] moveChunk updating self version to: 2|1||4fd977cce703b5476a9e2a0a through { _id: MinKey } -> { _id: 10.0 } for collection 'test.stuff'
m30001| Thu Jun 14 01:34:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652047612), what: "moveChunk.commit", ns: "test.stuff", details: { min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:07 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:07 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' unlocked.
m30001| Thu Jun 14 01:34:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652047613), what: "moveChunk.from", ns: "test.stuff", details: { min: { _id: 10.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:07 [conn4] command admin.$cmd command: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.stuff-_id_10.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:88 r:309 w:1150491 reslen:37 2024ms
m30999| Thu Jun 14 01:34:07 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 1|2||4fd977cce703b5476a9e2a0a and 2 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|1||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 4 version: 2|1||4fd977cce703b5476a9e2a0a based on: 1|2||4fd977cce703b5476a9e2a0a
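The migration above moves the { _id: 10 } -> MaxKey chunk from shard0001 to shard0000 (zero documents, so the cloned/catchup counters stay at 0), after which mongos reloads the chunk manager at version 2|1. A sketch of the command as logged at 'CMD: movechunk' a few lines earlier, runnable from a shell connected to the mongos:

    // assumes `db` is a mongo shell connection to the mongos on port 30999
    db.adminCommand({ movechunk: "test.stuff", find: { _id: 10 }, to: "localhost:30000" });
    // mongos reports: moveChunk result: { ok: 1.0 }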
m30999| Thu Jun 14 01:34:07 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff my last seq: 2 current: 4 version: 2|1||4fd977cce703b5476a9e2a0a manager: 0x9179888
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion shard0001 localhost:30001 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), shard: "shard0001", shardHost: "localhost:30001" } 0x9175ca8
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), ok: 1.0 }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 10.0 } dataWritten: 8312765 splitThreshold: 471859
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:07 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff my last seq: 2 current: 4 version: 2|0||4fd977cce703b5476a9e2a0a manager: 0x9179888
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion shard0000 localhost:30000 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), shard: "shard0000", shardHost: "localhost:30000" } 0x9175810
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff", need_authoritative: true, errmsg: "first time for collection 'test.stuff'", ok: 0.0 }
m30999| Thu Jun 14 01:34:07 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff my last seq: 2 current: 4 version: 2|0||4fd977cce703b5476a9e2a0a manager: 0x9179888
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion shard0000 localhost:30000 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9175810
m30000| Thu Jun 14 01:34:07 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: 10.0 } max: { _id: MaxKey } dataWritten: 8083660 splitThreshold: 471859
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 10.0 }
m30001| Thu Jun 14 01:34:07 [conn4] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: 10.0 }, from: "shard0001", splitKeys: [ { _id: 2.0 } ], shardId: "test.stuff-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:07 [conn4] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' acquired, ts : 4fd977cf5f0eb85d6d9da13f
m30001| Thu Jun 14 01:34:07 [conn4] splitChunk accepted at version 2|1||4fd977cce703b5476a9e2a0a
m30001| Thu Jun 14 01:34:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652047620), what: "split", ns: "test.stuff", details: { before: { min: { _id: MinKey }, max: { _id: 10.0 }, lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 2.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 2.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|1||4fd977cce703b5476a9e2a0a and 2 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|3||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 5 version: 2|3||4fd977cce703b5476a9e2a0a based on: 2|1||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { _id: 2.0 } max: { _id: 10.0 }
m30001| Thu Jun 14 01:34:07 [conn4] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: 10.0 }, from: "shard0001", splitKeys: [ { _id: 4.0 } ], shardId: "test.stuff-_id_2.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:07 [conn4] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' acquired, ts : 4fd977cf5f0eb85d6d9da140
m30001| Thu Jun 14 01:34:07 [conn4] splitChunk accepted at version 2|3||4fd977cce703b5476a9e2a0a
m30001| Thu Jun 14 01:34:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652047624), what: "split", ns: "test.stuff", details: { before: { min: { _id: 2.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 4.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 4.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|3||4fd977cce703b5476a9e2a0a and 3 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|5||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 6 version: 2|5||4fd977cce703b5476a9e2a0a based on: 2|3||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { _id: 4.0 } max: { _id: 10.0 }
m30001| Thu Jun 14 01:34:07 [conn4] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: 10.0 }, from: "shard0001", splitKeys: [ { _id: 6.0 } ], shardId: "test.stuff-_id_4.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:07 [conn4] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' acquired, ts : 4fd977cf5f0eb85d6d9da141
m30001| Thu Jun 14 01:34:07 [conn4] splitChunk accepted at version 2|5||4fd977cce703b5476a9e2a0a
m30001| Thu Jun 14 01:34:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652047627), what: "split", ns: "test.stuff", details: { before: { min: { _id: 4.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 6.0 }, lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 6.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|5||4fd977cce703b5476a9e2a0a and 4 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|7||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 7 version: 2|7||4fd977cce703b5476a9e2a0a based on: 2|5||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { _id: 6.0 } max: { _id: 10.0 }
m30001| Thu Jun 14 01:34:07 [conn4] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: 10.0 }, from: "shard0001", splitKeys: [ { _id: 8.0 } ], shardId: "test.stuff-_id_6.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:07 [conn4] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' acquired, ts : 4fd977cf5f0eb85d6d9da142
m30001| Thu Jun 14 01:34:07 [conn4] splitChunk accepted at version 2|7||4fd977cce703b5476a9e2a0a
m30001| Thu Jun 14 01:34:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42796", time: new Date(1339652047631), what: "split", ns: "test.stuff", details: { before: { min: { _id: 6.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 8.0 }, lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 8.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30001| Thu Jun 14 01:34:07 [conn4] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30001:1339652045:126706824' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|7||4fd977cce703b5476a9e2a0a and 5 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|9||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 8 version: 2|9||4fd977cce703b5476a9e2a0a based on: 2|7||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: 10.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:34:07 [initandlisten] connection accepted from 127.0.0.1:51395 #12 (12 connections now open)
m30000| Thu Jun 14 01:34:07 [conn6] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 12.0 } ], shardId: "test.stuff-_id_10.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:34:07 [conn6] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:34:07 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339652047:1652723532 (sleeping for 30000ms)
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' acquired, ts : 4fd977cf3ed8f1766c506474
m30000| Thu Jun 14 01:34:07 [conn6] splitChunk accepted at version 2|0||4fd977cce703b5476a9e2a0a
m30000| Thu Jun 14 01:34:07 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51384", time: new Date(1339652047637), what: "split", ns: "test.stuff", details: { before: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 10.0 }, max: { _id: 12.0 }, lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 12.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|9||4fd977cce703b5476a9e2a0a and 6 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 3 chunks into new chunk manager for test.stuff with version 2|11||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 9 version: 2|11||4fd977cce703b5476a9e2a0a based on: 2|9||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|11||000000000000000000000000 min: { _id: 12.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:34:07 [conn6] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: 12.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 14.0 } ], shardId: "test.stuff-_id_12.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:34:07 [conn6] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' acquired, ts : 4fd977cf3ed8f1766c506475
m30000| Thu Jun 14 01:34:07 [conn6] splitChunk accepted at version 2|11||4fd977cce703b5476a9e2a0a
m30000| Thu Jun 14 01:34:07 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51384", time: new Date(1339652047641), what: "split", ns: "test.stuff", details: { before: { min: { _id: 12.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 12.0 }, max: { _id: 14.0 }, lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 14.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|11||4fd977cce703b5476a9e2a0a and 7 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|13||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 10 version: 2|13||4fd977cce703b5476a9e2a0a based on: 2|11||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|13||000000000000000000000000 min: { _id: 14.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:34:07 [conn6] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: 14.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 16.0 } ], shardId: "test.stuff-_id_14.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:34:07 [conn6] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' acquired, ts : 4fd977cf3ed8f1766c506476
m30000| Thu Jun 14 01:34:07 [conn6] splitChunk accepted at version 2|13||4fd977cce703b5476a9e2a0a
m30000| Thu Jun 14 01:34:07 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51384", time: new Date(1339652047646), what: "split", ns: "test.stuff", details: { before: { min: { _id: 14.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 14.0 }, max: { _id: 16.0 }, lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 16.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|15, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|13||4fd977cce703b5476a9e2a0a and 8 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|15||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 11 version: 2|15||4fd977cce703b5476a9e2a0a based on: 2|13||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] splitting: test.stuff shard: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|15||000000000000000000000000 min: { _id: 16.0 } max: { _id: MaxKey }
m30000| Thu Jun 14 01:34:07 [conn6] received splitChunk request: { splitChunk: "test.stuff", keyPattern: { _id: 1.0 }, min: { _id: 16.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 18.0 } ], shardId: "test.stuff-_id_16.0", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:34:07 [conn6] created new distributed lock for test.stuff on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' acquired, ts : 4fd977cf3ed8f1766c506477
m30000| Thu Jun 14 01:34:07 [conn6] splitChunk accepted at version 2|15||4fd977cce703b5476a9e2a0a
m30000| Thu Jun 14 01:34:07 [conn6] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:07-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:51384", time: new Date(1339652047650), what: "split", ns: "test.stuff", details: { before: { min: { _id: 16.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 16.0 }, max: { _id: 18.0 }, lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') }, right: { min: { _id: 18.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a') } } }
m30000| Thu Jun 14 01:34:07 [conn6] distributed lock 'test.stuff/domU-12-31-39-01-70-B4:30000:1339652047:1652723532' unlocked.
m30999| Thu Jun 14 01:34:07 [conn] loading chunk manager for collection test.stuff using old chunk manager w/ version 2|15||4fd977cce703b5476a9e2a0a and 9 chunks
m30999| Thu Jun 14 01:34:07 [conn] loaded 2 chunks into new chunk manager for test.stuff with version 2|17||4fd977cce703b5476a9e2a0a
m30999| Thu Jun 14 01:34:07 [conn] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 12 version: 2|17||4fd977cce703b5476a9e2a0a based on: 2|15||4fd977cce703b5476a9e2a0a
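The repeated 'splitting:' / 'received splitChunk request' blocks above split the collection at _id 2, 4, 6 and 8 on shard0001 and at 12, 14, 16 and 18 on shard0000, taking the layout to ten chunks at version 2|17. From the client side each one is a single split command through the mongos, which mongos turns into the splitChunk requests seen in the shard logs; a sketch, assuming a shell connected to the mongos:

    // one split per boundary seen in the splitKeys arrays above
    [2, 4, 6, 8, 12, 14, 16, 18].forEach(function (x) {
        db.adminCommand({ split: "test.stuff", middle: { _id: x } });
    });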
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { ns: 1.0, min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.stuff-_id_MinKey", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), ns: "test.stuff", min: { _id: MinKey }, max: { _id: 2.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
ShardingTest test.stuff-_id_MinKey 2000|2 { "_id" : { $minKey : 1 } } -> { "_id" : 2 } shard0001 test.stuff
test.stuff-_id_2.0 2000|4 { "_id" : 2 } -> { "_id" : 4 } shard0001 test.stuff
test.stuff-_id_4.0 2000|6 { "_id" : 4 } -> { "_id" : 6 } shard0001 test.stuff
test.stuff-_id_6.0 2000|8 { "_id" : 6 } -> { "_id" : 8 } shard0001 test.stuff
test.stuff-_id_8.0 2000|9 { "_id" : 8 } -> { "_id" : 10 } shard0001 test.stuff
test.stuff-_id_10.0 2000|10 { "_id" : 10 } -> { "_id" : 12 } shard0000 test.stuff
test.stuff-_id_12.0 2000|12 { "_id" : 12 } -> { "_id" : 14 } shard0000 test.stuff
test.stuff-_id_14.0 2000|14 { "_id" : 14 } -> { "_id" : 16 } shard0000 test.stuff
test.stuff-_id_16.0 2000|16 { "_id" : 16 } -> { "_id" : 18 } shard0000 test.stuff
test.stuff-_id_18.0 2000|17 { "_id" : 18 } -> { "_id" : { $maxKey : 1 } } shard0000 test.stuff
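The table above is the test harness printing the resulting chunk layout: five chunks on shard0001 covering MinKey through 10, and five on shard0000 covering 10 through MaxKey. The same rows come straight from the config server; a shell query equivalent to the config.chunks lookup mongos runs just before printing (test.stuff being the only sharded collection here):

    db.getSiblingDB("config").chunks.find({ ns: "test.stuff" }).sort({ min: 1 });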
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: {} }, fields: {} } and CInfo { v_ns: "config.chunks", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "shard0000" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "shard0000" } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "shard0001" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "shard0001" } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
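The three config.$cmd count results above (n: 10, then n: 5 and n: 5) are the chunk totals the test verifies: ten chunks overall, five per shard. A sketch of the same checks from the shell:

    var conf = db.getSiblingDB("config");
    conf.chunks.count();                           // 10
    conf.chunks.count({ shard: "shard0000" });     // 5
    conf.chunks.count({ shard: "shard0001" });     // 5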
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff my last seq: 4 current: 12 version: 2|17||4fd977cce703b5476a9e2a0a manager: 0x9179888
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion shard0000 localhost:30000 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 2000|17, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), shard: "shard0000", shardHost: "localhost:30000" } 0x9175810
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), ok: 1.0 }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] needed to set remote version on connection to value compatible with [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff my last seq: 4 current: 12 version: 2|9||4fd977cce703b5476a9e2a0a manager: 0x9179888
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion shard0001 localhost:30001 test.stuff { setShardVersion: "test.stuff", configdb: "localhost:30000", version: Timestamp 2000|9, versionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), serverID: ObjectId('4fd977cce703b5476a9e2a08'), shard: "shard0001", shardHost: "localhost:30001" } 0x9175ca8
m30999| Thu Jun 14 01:34:07 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd977cce703b5476a9e2a0a'), ok: 1.0 }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] needed to set remote version on connection to value compatible with [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { _id: MinKey } max: { _id: 2.0 } dataWritten: 8097271 splitThreshold: 23592960
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|6||000000000000000000000000 min: { _id: 4.0 } max: { _id: 6.0 } dataWritten: 6429323 splitThreshold: 26214400
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
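Each "about to initiate autosplit ... dataWritten: N splitThreshold: 26214400" / "chunk not full enough to trigger auto-split" pair above reflects mongos's write-tracking heuristic: it keeps an estimate of bytes written to a chunk and only asks the owning shard for split points once that estimate crosses the threshold (26214400 bytes, i.e. 25 MB here). In this trace dataWritten (roughly 5-10 MB) stays below the threshold, so no split is attempted. The following is a sketch of that comparison under those assumptions; the struct and method names are hypothetical, not the mongos implementation.

    // Sketch only: why the log prints "chunk not full enough to trigger auto-split" --
    // the estimated bytes written to the chunk are still below splitThreshold.
    #include <cstdint>
    #include <iostream>

    struct ChunkWriteTracker {
        int64_t dataWritten = 0;             // running estimate of bytes written
        int64_t splitThreshold = 26214400;   // 25 MB, as logged above

        bool shouldTrySplit(int64_t bytesJustWritten) {
            dataWritten += bytesJustWritten;
            if (dataWritten < splitThreshold) {
                std::cout << "chunk not full enough to trigger auto-split\n";
                return false;                // the outcome seen throughout this trace
            }
            dataWritten = 0;                 // would now ask the shard for split points
            return true;
        }
    };

    int main() {
        ChunkWriteTracker chunk;
        chunk.shouldTrySplit(6429323);       // dataWritten figures of the same order as the log
        chunk.shouldTrySplit(8833113);
        return 0;
    }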
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|8||000000000000000000000000 min: { _id: 6.0 } max: { _id: 8.0 } dataWritten: 5692932 splitThreshold: 26214400
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0001:localhost:30001 lastmod: 2|9||000000000000000000000000 min: { _id: 8.0 } max: { _id: 10.0 } dataWritten: 8833113 splitThreshold: 26214400
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|10||000000000000000000000000 min: { _id: 10.0 } max: { _id: 12.0 } dataWritten: 7328956 splitThreshold: 26214400
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
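Each [pcursor] block above corresponds to one count command ({ count: "stuff", query: { b: 1.0 } }) that mongos scatters to both shards and then merges by summing the per-shard replies; the client-visible total is not printed in the log, but it follows from the "cursor: { n: ..., ok: 1.0 }" values. A minimal Python sketch of that merge, using the reply values from the cycle that ends just above (illustrative only, not mongos code):

    def merge_count_replies(replies):
        # Each reply is the { n, ok } document a shard returns for
        # { count: "stuff", query: { b: 1.0 } }, as seen in the log lines.
        assert all(r["ok"] == 1.0 for r in replies)
        return int(sum(r["n"] for r in replies))

    # Above, shard0000 returned n: 2.0 and shard0001 returned n: 10.0.
    print(merge_count_replies([{"n": 2.0, "ok": 1.0}, {"n": 10.0, "ok": 1.0}]))  # 12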
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|12||000000000000000000000000 min: { _id: 12.0 } max: { _id: 14.0 } dataWritten: 10142300 splitThreshold: 26214400
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
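The autosplit pair above records mongos's per-chunk write tracking for the chunk { _id: 12.0 } .. { _id: 14.0 }: split points are only requested from the shard once the bytes mongos estimates were written to the chunk exceed the split threshold, and here 10142300 < 26214400, so it logs "chunk not full enough". A hedged sketch of the comparison the log reports (values copied from the line above; not the mongos implementation):

    data_written = 10142300      # "dataWritten" from the log line
    split_threshold = 26214400   # "splitThreshold" from the log line
    if data_written < split_threshold:
        print("chunk not full enough to trigger auto-split")
    else:
        print("would request split points from the shard owning the chunk")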
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|14||000000000000000000000000 min: { _id: 14.0 } max: { _id: 16.0 } dataWritten: 7862524 splitThreshold: 26214400
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] about to initiate autosplit: ns:test.stuff at: shard0000:localhost:30000 lastmod: 2|17||000000000000000000000000 min: { _id: 18.0 } max: { _id: MaxKey } dataWritten: 6586969 splitThreshold: 23592960
m30999| Thu Jun 14 01:34:07 [conn] chunk not full enough to trigger auto-split no split entry
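Note that the threshold logged for this chunk is 23592960 rather than the 26214400 used for the interior chunks. This chunk borders MaxKey ({ _id: 18.0 } .. { _id: MaxKey }), and the lower figure is exactly 90% of the default, consistent with mongos lowering the split threshold for edge chunks so they split sooner; that interpretation is an inference from the numbers, not something the log states. The arithmetic:

    # 23592960 is 90% of 26214400, the threshold logged for the interior chunks above.
    print(26214400 * 9 // 10)  # 23592960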
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { b: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { b: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 0.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 0.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
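Unlike the { b: 1.0 } counts, which fan out over 2 shards, the count just above with query { _id: 0.0 } is initialized "over 1 shards": the filter fixes the shard key (_id, the field used in the chunk bounds), so mongos routes the command only to the shard owning the chunk covering that value, shard0001 here. A minimal Python sketch of that kind of range-based targeting, with a simplified chunk map (the shard0000 range comes from the autosplit lines; the low range is assumed to live on shard0001 because that is where this query was sent):

    NEG_INF = float("-inf")   # stand-in for MinKey
    POS_INF = float("inf")    # stand-in for MaxKey

    def owning_shard(chunks, key):
        # chunks: list of (min, max, shard) half-open ranges sorted by min
        for lo, hi, shard in chunks:
            if lo <= key < hi:
                return shard
        raise KeyError("no chunk covers %r" % key)

    chunks = [
        (NEG_INF, 12.0, "shard0001"),  # assumed: low _id range on shard0001, matching the routing above
        (12.0, POS_INF, "shard0000"),  # simplified; the log shows this range further split on shard0000
    ]
    print(owning_shard(chunks, 0.0))   # shard0001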
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 0.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 0.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
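The contrast between "initializing over 1 shards" for the { _id: 1.0 } count and "initializing over 2 shards" for the empty-filter count is the routing decision mongos makes from the chunk map of test.stuff: an equality predicate on the shard key pins a single owning chunk, while an unfiltered count must visit every shard that owns a chunk. The sketch below reproduces that decision in Python under made-up chunk boundaries; the real split points live in config.chunks and are not shown in this log.

```python
# Hedged sketch of shard targeting for a count on test.stuff (shard key _id).
# Chunk boundaries are invented for illustration, chosen so that _id 1..7 land
# on shard0001 as they do in the log above.
from bisect import bisect_right

# (lower bound, owning shard) pairs, sorted by lower bound; the upper bound of
# each chunk is the next entry's lower bound.
CHUNKS = [
    (float("-inf"), "shard0000:localhost:30000"),
    (0.0,           "shard0001:localhost:30001"),
]


def shards_for_count(filter_doc: dict) -> set:
    """Return the set of shards a count against test.stuff must visit."""
    if "_id" in filter_doc and not isinstance(filter_doc["_id"], dict):
        key = float(filter_doc["_id"])
        idx = bisect_right([low for low, _ in CHUNKS], key) - 1
        return {CHUNKS[idx][1]}             # targeted: "over 1 shards"
    return {shard for _, shard in CHUNKS}   # fan-out: "over 2 shards"


print(shards_for_count({"_id": 1.0}))  # one shard, as in the { _id: 1.0 } count
print(shards_for_count({}))            # both shards, as in the {} count
```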
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
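For the fan-out count just above, mongos runs { count: "stuff" } on each targeted shard and returns the sum of the per-shard n values to the client: 10.0 from shard0000 plus 8.0 from shard0001, i.e. 18 at this point in the run. A small sketch of that merge step, using the two reply documents copied from the log (the function name is illustrative, not the mongos internals):

```python
# Sum per-shard count replies the way the merged result implies.
def merge_count_replies(replies):
    """Add up per-shard n values, failing if any shard reported an error."""
    total = 0
    for shard, reply in replies.items():
        if not reply.get("ok"):
            raise RuntimeError("count failed on %s: %r" % (shard, reply))
        total += int(reply["n"])
    return total


shard_replies = {
    "shard0000:localhost:30000": {"n": 10.0, "ok": 1.0},  # from the log above
    "shard0001:localhost:30001": {"n": 8.0, "ok": 1.0},   # from the log above
}
assert merge_count_replies(shard_replies) == 18
```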
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 2.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 2.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 2.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 2.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 3.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 3.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 3.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 3.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 4.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 4.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 4.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 4.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 5.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 5.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 5.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 5.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 6.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 6.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 6.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 6.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 7.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 7.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
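Across these cycles the targeted count for each _id flips from n: 1 to n: 0 while the shard0001 total drops by one per _id (9, 8, 7, ... 2) and shard0000 stays at 10, which is consistent with a test that removes one document per shard-key value and re-counts after each removal. The snippet below is a guess at that access pattern, written as a standalone pymongo script rather than the mongo shell jstest that actually produced this log; the host/port and driver calls are assumptions, not taken from the log.

```python
# Hypothetical reconstruction of the driver loop behind this stretch of log.
from pymongo import MongoClient

client = MongoClient("localhost", 30999)   # the m30999 mongos
stuff = client.test.stuff

for i in range(1, 8):
    assert stuff.count_documents({"_id": float(i)}) == 1   # targeted: 1 shard
    stuff.delete_one({"_id": float(i)})
    assert stuff.count_documents({"_id": float(i)}) == 0   # targeted: 1 shard
    print("total now", stuff.count_documents({}))          # fan-out: 2 shards
```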
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 7.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 7.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
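The two block shapes above are the whole story of this trace: a count with an equality filter on _id initializes the pcursor over a single shard, while a count with an empty query fans out to both shards and mongos sums the per-shard replies. A minimal mongo-shell sketch of the same two query shapes, assuming test.stuff is sharded on { _id: 1 } across shard0000 and shard0001 (the database, collection, and shard names come from the log; the cluster setup itself is assumed):

// Minimal sketch, assuming a running cluster with test.stuff sharded on { _id: 1 }.
db = db.getSiblingDB("test");

// Equality on the shard key: mongos targets one shard, so the pcursor
// above initializes over 1 shard and returns that shard's { n, ok } alone.
var targeted = db.stuff.count({ _id: 7 });

// Empty query: mongos must ask every shard holding a chunk of test.stuff,
// so the pcursor initializes over 2 shards and adds up the n values.
var total = db.stuff.count({});

print("count({_id: 7}) = " + targeted + ", count({}) = " + total);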
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 8.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 8.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 8.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 8.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 9.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 9.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 9.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 9.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
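Each "finished on shard" line above records that shard's raw command reply in the cursor field, e.g. { n: 1.0, ok: 1.0 } while the document is still present and { n: 0.0, ok: 1.0 } once it is gone. The same reply document can be obtained by sending the count command straight to a shard's mongod; a hedged sketch (the port is taken from the log, a reachable shard is assumed, and direct shard reads may differ from routed counts during chunk migrations):

// Hedged sketch: send the same count command directly to one shard
// (localhost:30001, taken from the log) to see the raw reply that the
// pcursor stores per shard before mongos sums the n values.
var shardConn = new Mongo("localhost:30001");
var reply = shardConn.getDB("test").runCommand({ count: "stuff", query: { _id: 9 } });
printjson(reply);   // something like { n: 0, ok: 1 } once _id 9 has been removed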
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 10.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 10.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 10.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 10.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 10.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
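From _id: 10.0 onward the single-shard counts are routed to shard0000, whereas _id 7 through 9 went to shard0001, so the chunk boundary for test.stuff apparently falls near _id: 10. The routing table mongos consults lives in the config database and can be inspected from the shell; a sketch assuming the config metadata is reachable through this same mongos (config.chunks and sh.status() are standard, but the exact split point is inferred from the routing above, not printed in the log):

// Hedged sketch: list the chunk ranges mongos is routing against.
var config = db.getSiblingDB("config");
config.chunks.find({ ns: "test.stuff" }).sort({ min: 1 }).forEach(function (c) {
    print(tojson(c.min) + " -> " + tojson(c.max) + " on " + c.shard);
});
sh.status();   // human-readable summary of shards, databases, and chunks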
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 9.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 11.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 11.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 11.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 11.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 8.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 12.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 12.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 12.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 12.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 7.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 13.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 13.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 13.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 13.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 6.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 14.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 14.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 14.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 14.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 15.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 15.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 15.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 15.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 4.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 16.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 16.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 16.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 16.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 3.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 17.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 17.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 17.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 17.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 18.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 18.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 18.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 18.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 19.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 19.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: {} }, fields: {} } and CInfo { v_ns: "test.stuff", filter: {} }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 2 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] creating pcursor over QSpec { ns: "test.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "stuff", query: { _id: 19.0 } }, fields: {} } and CInfo { v_ns: "test.stuff", filter: { _id: 19.0 } }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing over 1 shards required by [test.stuff @ 2|17||4fd977cce703b5476a9e2a0a]
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:07 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.stuff @ 2|17||4fd977cce703b5476a9e2a0a", cursor: { n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
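The repeating pattern above (a full-collection count, a per-_id count of 1, another full count that is one lower, then a per-_id count of 0) is the signature of a findAndModify-with-remove loop driven through mongos against the sharded test.stuff collection. Below is a minimal mongo-shell sketch of that access pattern, assuming a mongos on port 30999 as in this log; the connection handle and the id variable are illustrative and not taken from the test file.

    // Illustrative sketch only: reproduces the count/remove/count pattern seen above.
    var mongosConn = new Mongo("localhost:30999");      // the mongos from this log
    var db = mongosConn.getDB("test");
    var id = 14;                                        // one of the _ids being removed
    var before = db.stuff.count();                      // fans out to both shards
    assert.eq(1, db.stuff.count({ _id: id }));          // targeted to a single shard
    db.stuff.findAndModify({ query: { _id: id }, sort: { _id: 1 }, remove: true });
    assert.eq(before - 1, db.stuff.count());            // total shrinks by one
    assert.eq(0, db.stuff.count({ _id: id }));          // the removed _id is gone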
m30999| Thu Jun 14 01:34:07 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:34:07 [conn7] end connection 127.0.0.1:51387 (11 connections now open)
m30000| Thu Jun 14 01:34:07 [conn3] end connection 127.0.0.1:51379 (10 connections now open)
m30000| Thu Jun 14 01:34:07 [conn4] end connection 127.0.0.1:51382 (9 connections now open)
m30000| Thu Jun 14 01:34:07 [conn6] end connection 127.0.0.1:51384 (8 connections now open)
m30001| Thu Jun 14 01:34:07 [conn3] end connection 127.0.0.1:42795 (4 connections now open)
m30001| Thu Jun 14 01:34:07 [conn4] end connection 127.0.0.1:42796 (3 connections now open)
Thu Jun 14 01:34:08 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:34:08 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:34:08 [interruptThread] now exiting
m30000| Thu Jun 14 01:34:08 dbexit:
m30000| Thu Jun 14 01:34:08 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:34:08 [interruptThread] closing listening socket: 24
m30000| Thu Jun 14 01:34:08 [interruptThread] closing listening socket: 25
m30000| Thu Jun 14 01:34:08 [interruptThread] closing listening socket: 26
m30000| Thu Jun 14 01:34:08 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:34:08 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:34:08 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:34:08 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:34:08 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:34:08 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:34:08 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:34:08 dbexit: really exiting now
m30001| Thu Jun 14 01:34:08 [conn5] end connection 127.0.0.1:42800 (2 connections now open)
Thu Jun 14 01:34:09 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:34:09 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:34:09 [interruptThread] now exiting
m30001| Thu Jun 14 01:34:09 dbexit:
m30001| Thu Jun 14 01:34:09 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:34:09 [interruptThread] closing listening socket: 27
m30001| Thu Jun 14 01:34:09 [interruptThread] closing listening socket: 28
m30001| Thu Jun 14 01:34:09 [interruptThread] closing listening socket: 29
m30001| Thu Jun 14 01:34:09 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:34:09 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:34:09 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:34:09 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:34:09 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:34:09 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:34:09 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:34:09 dbexit: really exiting now
Thu Jun 14 01:34:10 shell: stopped mongo program on port 30001
*** ShardingTest find_and_modify_sharded completed successfully in 7.5 seconds ***
7558.176994ms
Thu Jun 14 01:34:10 [initandlisten] connection accepted from 127.0.0.1:54899 #29 (16 connections now open)
*******************************************
Test : findandmodify2.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/findandmodify2.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/findandmodify2.js";TestData.testFile = "findandmodify2.js";TestData.testName = "findandmodify2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:34:10 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/find_and_modify_sharded_20'
Thu Jun 14 01:34:10 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/find_and_modify_sharded_20
m30000| Thu Jun 14 01:34:10
m30000| Thu Jun 14 01:34:10 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:34:10
m30000| Thu Jun 14 01:34:10 [initandlisten] MongoDB starting : pid=24742 port=30000 dbpath=/data/db/find_and_modify_sharded_20 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:34:10 [initandlisten]
m30000| Thu Jun 14 01:34:10 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:34:10 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:34:10 [initandlisten]
m30000| Thu Jun 14 01:34:10 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:34:10 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:34:10 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:34:10 [initandlisten]
m30000| Thu Jun 14 01:34:10 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:34:10 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:34:10 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:34:10 [initandlisten] options: { dbpath: "/data/db/find_and_modify_sharded_20", port: 30000 }
m30000| Thu Jun 14 01:34:10 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:34:10 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/find_and_modify_sharded_21'
m30000| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:51398 #1 (1 connection now open)
Thu Jun 14 01:34:11 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/find_and_modify_sharded_21
m30001| Thu Jun 14 01:34:11
m30001| Thu Jun 14 01:34:11 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:34:11
m30001| Thu Jun 14 01:34:11 [initandlisten] MongoDB starting : pid=24755 port=30001 dbpath=/data/db/find_and_modify_sharded_21 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:34:11 [initandlisten]
m30001| Thu Jun 14 01:34:11 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:34:11 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:34:11 [initandlisten]
m30001| Thu Jun 14 01:34:11 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:34:11 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:34:11 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:34:11 [initandlisten]
m30001| Thu Jun 14 01:34:11 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:34:11 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:34:11 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:34:11 [initandlisten] options: { dbpath: "/data/db/find_and_modify_sharded_21", port: 30001 }
m30001| Thu Jun 14 01:34:11 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:34:11 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:42807 #1 (1 connection now open)
ShardingTest find_and_modify_sharded_2 :
{
	"config" : "localhost:30000",
	"shards" : [
		connection to localhost:30000,
		connection to localhost:30001
	]
}
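The block above is the ShardingTest helper printing its topology: a config/first-shard mongod on port 30000, a second shard on 30001, and (a few lines further down) a mongos on 30999 started with -vv and a 1 MB max chunk size. A rough mongo-shell sketch of how such a test typically constructs this cluster, using the positional ShardingTest constructor of this era; the exact arguments inside findandmodify2.js are an assumption here.

    // Sketch under assumptions: test name, 2 shards, verbosity 2 (-vv), 1 mongos, 1 MB chunks.
    var s = new ShardingTest("find_and_modify_sharded_2", 2, 2, 1, { chunksize: 1 });
    s.adminCommand({ enablesharding: "test" });   // shard the test database
    // ... shard a collection, run the findAndModify assertions, then tear down:
    s.stop();                                     // sends SIGTERM to mongos and both mongods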
m30000| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:51401 #2 (2 connections now open)
m30000| Thu Jun 14 01:34:11 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:34:11 [FileAllocator] creating directory /data/db/find_and_modify_sharded_20/_tmp
Thu Jun 14 01:34:11 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -vv
m30999| Thu Jun 14 01:34:11 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:34:11 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24770 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:34:11 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:34:11 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:34:11 [mongosMain] options: { configdb: "localhost:30000", port: 30999, vv: true }
m30999| Thu Jun 14 01:34:11 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:34:11 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:51403 #3 (3 connections now open)
m30999| Thu Jun 14 01:34:11 [mongosMain] connected connection!
m30000| Thu Jun 14 01:34:11 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/config.ns, size: 16MB, took 0.262 secs
m30000| Thu Jun 14 01:34:11 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/config.0, filling with zeroes...
m30999| Thu Jun 14 01:34:11 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:11 [mongosMain] connected connection!
m30000| Thu Jun 14 01:34:11 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/config.0, size: 16MB, took 0.24 secs
m30000| Thu Jun 14 01:34:11 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:11 [conn2] insert config.settings keyUpdates:0 locks(micros) w:526055 525ms
m30000| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:51406 #4 (4 connections now open)
m30000| Thu Jun 14 01:34:11 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:11 [mongosMain] MaxChunkSize: 1
m30999| Thu Jun 14 01:34:11 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:34:11 [mongosMain] waiting for connections on port 30999
m30000| Thu Jun 14 01:34:11 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:11 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:34:11 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:11 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:11 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:11 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:11 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:34:11 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:11 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:34:11 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:34:11 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:34:11 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:34:11 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:34:11
m30999| Thu Jun 14 01:34:11 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:11 [Balancer] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:34:11 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:11 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/config.1, filling with zeroes...
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:11 [Balancer] connected connection!
m30000| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:51407 #5 (5 connections now open)
m30999| Thu Jun 14 01:34:11 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:34:11 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:34:11 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:34:11 [Balancer] skew from remote server localhost:30000 found: -1
m30999| Thu Jun 14 01:34:11 [Balancer] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds.
m30999| Thu Jun 14 01:34:11 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:34:11 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652051:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652051:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652051:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:34:11 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd977d386952769019d8c9d" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:34:11 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652051:1804289383' acquired, ts : 4fd977d386952769019d8c9d
m30999| Thu Jun 14 01:34:11 [Balancer] *** start balancing round
m30000| Thu Jun 14 01:34:11 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:11 [Balancer] no collections to balance
m30999| Thu Jun 14 01:34:11 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:34:11 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:34:11 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652051:1804289383' unlocked.
m30999| Thu Jun 14 01:34:11 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652051:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:34:11 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:11 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:34:11 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652051:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:34:11 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 1 total records. 0 secs
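The balancer round and lock pinger above persist their state in the config database (config.locks and config.lockpings, both visible in the build-index lines). A hedged sketch of inspecting that state from a shell connected to the mongos:

// Inspect the balancer lock and lock-pinger documents on the config server.
var configDB = db.getSiblingDB("config");
configDB.locks.find({ _id: "balancer" }).pretty();      // state: 0 means unlocked, as logged above
configDB.lockpings.find().sort({ ping: -1 }).limit(1);  // most recent ping from this mongos process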
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:34:11 [mongosMain] connection accepted from 127.0.0.1:43387 #1 (1 connection now open)
m30999| Thu Jun 14 01:34:11 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:34:11 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:11 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:34:11 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:34:11 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:42816 #2 (2 connections now open)
m30999| Thu Jun 14 01:34:11 [conn] connected connection!
m30999| Thu Jun 14 01:34:11 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:34:11 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:34:11 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:34:11 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:11 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: -1, options: 0, query: { _id: "test" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:11 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:51410 #6 (6 connections now open)
m30999| Thu Jun 14 01:34:11 [conn] connected connection!
m30999| Thu Jun 14 01:34:11 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977d386952769019d8c9c
m30999| Thu Jun 14 01:34:11 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:34:11 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd977d386952769019d8c9c'), authoritative: true }
m30999| Thu Jun 14 01:34:11 [conn] creating new connection to:localhost:30001
---------- Creating large payload...
---------- Done.
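The "Creating large payload" marker above comes from the test script building a large string to write through findAndModify later on; its actual size is not shown in the log, so the sketch below is only a stand-in:

// Stand-in for the test's large payload; 32 KB is an assumption, not a logged value.
var largePayload = new Array(32 * 1024).join("x");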
m30001| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:42818 #3 (3 connections now open)
m30001| Thu Jun 14 01:34:11 [conn3] CMD: drop test.stuff_col_update
m30001| Thu Jun 14 01:34:11 [conn3] CMD: drop test.stuff_col_update_upsert
m30001| Thu Jun 14 01:34:11 [conn3] CMD: drop test.stuff_col_fam
m30001| Thu Jun 14 01:34:11 [conn3] CMD: drop test.stuff_col_fam_upsert
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:11 [conn] connected connection!
m30999| Thu Jun 14 01:34:11 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977d386952769019d8c9c
m30999| Thu Jun 14 01:34:11 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:34:11 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd977d386952769019d8c9c'), authoritative: true }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test", partitioned: true, primary: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: -1, options: 0, query: { _id: "shard0001" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:11 [conn] DROP: test.stuff_col_update
m30999| Thu Jun 14 01:34:11 [conn] DROP: test.stuff_col_update_upsert
m30999| Thu Jun 14 01:34:11 [conn] DROP: test.stuff_col_fam
m30999| Thu Jun 14 01:34:11 [conn] DROP: test.stuff_col_fam_upsert
m30999| Thu Jun 14 01:34:11 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:34:11 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:34:11 [conn] connected connection!
m30999| Thu Jun 14 01:34:11 [conn] CMD: shardcollection: { shardcollection: "test.stuff_col_update", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:11 [conn] enable sharding on: test.stuff_col_update with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:34:11 [conn] going to create 1 chunk(s) for: test.stuff_col_update using new epoch 4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:11 [conn] loaded 1 chunks into new chunk manager for test.stuff_col_update with version 1|0||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:11 [conn] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 2 version: 1|0||4fd977d386952769019d8c9e based on: (empty)
m30000| Thu Jun 14 01:34:11 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:34:11 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:11 [conn] resetting shard version of test.stuff_col_update on localhost:30000, version is zero
m30999| Thu Jun 14 01:34:11 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff_col_update my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x9d02188
m30999| Thu Jun 14 01:34:11 [conn] setShardVersion shard0000 localhost:30000 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0000", shardHost: "localhost:30000" } 0x9cfdcd0
m30999| Thu Jun 14 01:34:11 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:11 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 0 current: 2 version: 1|0||4fd977d386952769019d8c9e manager: 0x9d02188
m30999| Thu Jun 14 01:34:11 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d386952769019d8c9e'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30001| Thu Jun 14 01:34:11 [initandlisten] connection accepted from 127.0.0.1:42819 #4 (4 connections now open)
m30001| Thu Jun 14 01:34:11 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_21/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:34:11 [FileAllocator] creating directory /data/db/find_and_modify_sharded_21/_tmp
m30000| Thu Jun 14 01:34:12 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/config.1, size: 32MB, took 0.606 secs
m30001| Thu Jun 14 01:34:12 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_21/test.ns, size: 16MB, took 0.314 secs
m30001| Thu Jun 14 01:34:12 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_21/test.0, filling with zeroes...
m30001| Thu Jun 14 01:34:13 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_21/test.0, size: 16MB, took 0.307 secs
m30001| Thu Jun 14 01:34:13 [conn4] build index test.stuff_col_update { _id: 1 }
m30001| Thu Jun 14 01:34:13 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:34:13 [conn4] info: creating collection test.stuff_col_update on add index
m30001| Thu Jun 14 01:34:13 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) r:301 w:1108747 1108ms
m30001| Thu Jun 14 01:34:13 [conn3] command admin.$cmd command: { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d386952769019d8c9e'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:94 w:213 reslen:199 1106ms
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_update", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_update'", ok: 0.0 }
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 0 current: 2 version: 1|0||4fd977d386952769019d8c9e manager: 0x9d02188
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d386952769019d8c9e'), serverID: ObjectId('4fd977d386952769019d8c9c'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30001| Thu Jun 14 01:34:13 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:34:13 [initandlisten] connection accepted from 127.0.0.1:51413 #7 (7 connections now open)
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] CMD: shardcollection: { shardcollection: "test.stuff_col_update_upsert", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:13 [conn] enable sharding on: test.stuff_col_update_upsert with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] going to create 1 chunk(s) for: test.stuff_col_update_upsert using new epoch 4fd977d586952769019d8c9f
m30001| Thu Jun 14 01:34:13 [conn4] build index test.stuff_col_update_upsert { _id: 1 }
m30001| Thu Jun 14 01:34:13 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:34:13 [conn4] info: creating collection test.stuff_col_update_upsert on add index
m30999| Thu Jun 14 01:34:13 [conn] loaded 1 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|0||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 3 version: 1|0||4fd977d586952769019d8c9f based on: (empty)
m30999| Thu Jun 14 01:34:13 [conn] resetting shard version of test.stuff_col_update_upsert on localhost:30000, version is zero
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff_col_update_upsert my last seq: 0 current: 3 version: 0|0||000000000000000000000000 manager: 0x9d01e10
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0000 localhost:30000 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0000", shardHost: "localhost:30000" } 0x9cfdcd0
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 0 current: 3 version: 1|0||4fd977d586952769019d8c9f manager: 0x9d01e10
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d586952769019d8c9f'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_update_upsert", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_update_upsert'", ok: 0.0 }
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 0 current: 3 version: 1|0||4fd977d586952769019d8c9f manager: 0x9d01e10
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d586952769019d8c9f'), serverID: ObjectId('4fd977d386952769019d8c9c'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30001| Thu Jun 14 01:34:13 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] CMD: shardcollection: { shardcollection: "test.stuff_col_fam", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:13 [conn] enable sharding on: test.stuff_col_fam with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] going to create 1 chunk(s) for: test.stuff_col_fam using new epoch 4fd977d586952769019d8ca0
m30001| Thu Jun 14 01:34:13 [conn4] build index test.stuff_col_fam { _id: 1 }
m30001| Thu Jun 14 01:34:13 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:34:13 [conn4] info: creating collection test.stuff_col_fam on add index
m30999| Thu Jun 14 01:34:13 [conn] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|0||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 4 version: 1|0||4fd977d586952769019d8ca0 based on: (empty)
m30999| Thu Jun 14 01:34:13 [conn] resetting shard version of test.stuff_col_fam on localhost:30000, version is zero
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff_col_fam my last seq: 0 current: 4 version: 0|0||000000000000000000000000 manager: 0x9d02f88
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0000 localhost:30000 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0000", shardHost: "localhost:30000" } 0x9cfdcd0
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 0 current: 4 version: 1|0||4fd977d586952769019d8ca0 manager: 0x9d02f88
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d586952769019d8ca0'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_fam", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_fam'", ok: 0.0 }
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 0 current: 4 version: 1|0||4fd977d586952769019d8ca0 manager: 0x9d02f88
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d586952769019d8ca0'), serverID: ObjectId('4fd977d386952769019d8c9c'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30001| Thu Jun 14 01:34:13 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] CMD: shardcollection: { shardcollection: "test.stuff_col_fam_upsert", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:13 [conn] enable sharding on: test.stuff_col_fam_upsert with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] going to create 1 chunk(s) for: test.stuff_col_fam_upsert using new epoch 4fd977d586952769019d8ca1
m30001| Thu Jun 14 01:34:13 [conn4] build index test.stuff_col_fam_upsert { _id: 1 }
m30001| Thu Jun 14 01:34:13 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:34:13 [conn4] info: creating collection test.stuff_col_fam_upsert on add index
m30999| Thu Jun 14 01:34:13 [conn] loaded 1 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|0||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 5 version: 1|0||4fd977d586952769019d8ca1 based on: (empty)
m30999| Thu Jun 14 01:34:13 [conn] resetting shard version of test.stuff_col_fam_upsert on localhost:30000, version is zero
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff_col_fam_upsert my last seq: 0 current: 5 version: 0|0||000000000000000000000000 manager: 0x9d03798
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0000 localhost:30000 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0000", shardHost: "localhost:30000" } 0x9cfdcd0
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 0 current: 5 version: 1|0||4fd977d586952769019d8ca1 manager: 0x9d03798
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d586952769019d8ca1'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_fam_upsert", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_fam_upsert'", ok: 0.0 }
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 0 current: 5 version: 1|0||4fd977d586952769019d8ca1 manager: 0x9d03798
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977d586952769019d8ca1'), serverID: ObjectId('4fd977d386952769019d8c9c'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30001| Thu Jun 14 01:34:13 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
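By this point all four test collections have been sharded on { _id: 1.0 }, as shown by the CMD: shardcollection lines above. The equivalent shell calls, using the standard sh.* helpers rather than the raw admin commands the mongos logs, would be roughly:

// Shard the four test collections on _id (mirrors the shardcollection commands logged above).
sh.enableSharding("test");   // already logged above as "enabling sharding on: test"
[ "stuff_col_update",
  "stuff_col_update_upsert",
  "stuff_col_fam",
  "stuff_col_fam_upsert" ].forEach(function (name) {
    sh.shardCollection("test." + name, { _id: 1 });
});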
---------- Update via findAndModify...
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 115505 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30001| Thu Jun 14 01:34:13 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_21/test.1, filling with zeroes...
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_fam-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2171
m30001| Thu Jun 14 01:34:13 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652053:262699440 (sleeping for 30000ms)
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|0||4fd977d586952769019d8ca0
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053106), what: "split", ns: "test.stuff_col_fam", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2172
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|2||4fd977d586952769019d8ca0
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053135), what: "split", ns: "test.stuff_col_fam", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') }, right: { min: { _id: 99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: 99.0 }, from: "shard0001", splitKeys: [ { _id: 42.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2173
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|4||4fd977d586952769019d8ca0
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053222), what: "split", ns: "test.stuff_col_fam", details: { before: { min: { _id: 0.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 42.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') }, right: { min: { _id: 42.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : 42.0 }
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 42.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 42.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 42.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 42.0 }, max: { _id: 99.0 }, from: "shard0001", splitKeys: [ { _id: 69.0 } ], shardId: "test.stuff_col_fam-_id_42.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2174
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|6||4fd977d586952769019d8ca0
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053243), what: "split", ns: "test.stuff_col_fam", details: { before: { min: { _id: 42.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 42.0 }, max: { _id: 69.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') }, right: { min: { _id: 69.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
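After the four splitChunk rounds above, test.stuff_col_fam is covered by five chunks with boundaries at MinKey, 0, 42, 69, 99 and MaxKey, all still on shard0001. A hedged way to confirm the resulting ranges from a shell connected to the mongos:

// List the chunk ranges recorded for test.stuff_col_fam on the config server.
db.getSiblingDB("config").chunks
  .find({ ns: "test.stuff_col_fam" }, { _id: 0, min: 1, max: 1, shard: 1 })
  .sort({ min: 1 })
  .forEach(printjson);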
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 42.0 } -->> { : 69.0 }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|0||4fd977d586952769019d8ca0 and 1 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam with version 1|2||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 6 version: 1|2||4fd977d586952769019d8ca0 based on: 1|0||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam shard: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 4 current: 6 version: 1|2||4fd977d586952769019d8ca0 manager: 0x9d03c20
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd977d586952769019d8ca0'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8ca0'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 145668 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 101343 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||4fd977d586952769019d8ca0 and 2 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam with version 1|4||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 7 version: 1|4||4fd977d586952769019d8ca0 based on: 1|2||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam shard: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 99.0 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:34:13 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:34:13 [conn] recently split chunk: { min: { _id: 99.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 6 current: 7 version: 1|4||4fd977d586952769019d8ca0 manager: 0x9d02f88
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd977d586952769019d8ca0'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8ca0'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 240981 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 229558 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 229558 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|4||4fd977d586952769019d8ca0 and 3 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 3 chunks into new chunk manager for test.stuff_col_fam with version 1|6||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 8 version: 1|6||4fd977d586952769019d8ca0 based on: 1|4||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam shard: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } on: { _id: 42.0 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 7 current: 8 version: 1|6||4fd977d586952769019d8ca0 manager: 0x9d00990
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd977d586952769019d8ca0'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8ca0'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { _id: 0.0 } max: { _id: 42.0 } dataWritten: 239837 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 39.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 42.0 } max: { _id: 99.0 } dataWritten: 241185 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 73.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 42.0 } max: { _id: 99.0 } dataWritten: 229558 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|6||4fd977d586952769019d8ca0 and 4 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam with version 1|8||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 9 version: 1|8||4fd977d586952769019d8ca0 based on: 1|6||4fd977d586952769019d8ca0
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam shard: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 42.0 } max: { _id: 99.0 } on: { _id: 69.0 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 8 current: 9 version: 1|8||4fd977d586952769019d8ca0 manager: 0x9d02f88
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('4fd977d586952769019d8ca0'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8ca0'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { _id: 42.0 } max: { _id: 69.0 } dataWritten: 239698 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 67.0 }
m30000| Thu Jun 14 01:34:13 [initandlisten] connection accepted from 127.0.0.1:51414 #8 (8 connections now open)
m30000| Thu Jun 14 01:34:13 [initandlisten] connection accepted from 127.0.0.1:51415 #9 (9 connections now open)
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { _id: 42.0 } max: { _id: 69.0 } dataWritten: 229558 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 65.0 }
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 42.0 } -->> { : 69.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 69.0 } max: { _id: 99.0 } dataWritten: 223003 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 69.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 90.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 69.0 } max: { _id: 99.0 } dataWritten: 229558 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 88.0 }
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 69.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 69.0 } max: { _id: 99.0 } dataWritten: 229558 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 69.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 86.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 69.0 } max: { _id: 99.0 } dataWritten: 229558 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam { : 69.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 85.0 }
---------- Done.
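The "Update via findAndModify" phase that just finished drives updates through the mongos, which routes each call by the _id in the query and triggers the autosplits seen above. The sketch below shows the call shape only; the document contents, the _id range, and whether the documents were pre-inserted are assumptions, since the test script is not part of this log.

// Hypothetical reconstruction of the update phase (call shape only).
var testDB = db.getSiblingDB("test");
var largePayload = new Array(32 * 1024).join("x");      // stand-in, see the earlier payload sketch
for (var id = 0; id < 100; id++) {                      // _id split points 0-99 suggest this range
    testDB.stuff_col_fam.findAndModify({
        query:  { _id: id },
        update: { $set: { payload: largePayload } }
    });
}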
---------- Upsert via findAndModify...
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 81098 splitThreshold: 921
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32794 splitThreshold: 921
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32794 splitThreshold: 921
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_fam_upsert-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2175
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|0||4fd977d586952769019d8ca1
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053359), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|0||4fd977d586952769019d8ca1 and 1 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|2||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 10 version: 1|2||4fd977d586952769019d8ca1 based on: 1|0||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 5 current: 10 version: 1|2||4fd977d586952769019d8ca1 manager: 0x9d00990
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd977d586952769019d8ca1'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8ca1'), ok: 1.0 }
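
The autosplit above acquired the distributed lock on localhost:30000, split the MinKey-to-MaxKey chunk of test.stuff_col_fam_upsert at { _id: 0.0 }, wrote a "split" entry to the config changelog, and left the collection at version 1|2. A hand-driven split through mongos looks roughly like the sketch below, reusing the split point observed in the log; the changelog query shows where the "about to log metadata event" documents end up:

    // Sketch of a manual split equivalent to the autosplit logged above (run via mongos):
    db.adminCommand({ split: "test.stuff_col_fam_upsert", middle: { _id: 0 } });
    // Or let the server pick a median split point for the chunk containing { _id: 0 }:
    db.adminCommand({ split: "test.stuff_col_fam_upsert", find: { _id: 0 } });
    // The "about to log metadata event" lines correspond to documents in config.changelog:
    db.getSiblingDB("config").changelog.find({ what: "split", ns: "test.stuff_col_fam_upsert" });
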
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 180841 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 15.0 } ], shardId: "test.stuff_col_fam_upsert-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2176
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|2||4fd977d586952769019d8ca1
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053376), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 15.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') }, right: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|2||4fd977d586952769019d8ca1 and 2 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|4||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 11 version: 1|4||4fd977d586952769019d8ca1 based on: 1|2||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 15.0 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:34:13 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:34:13 [conn] recently split chunk: { min: { _id: 15.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 10 current: 11 version: 1|4||4fd977d586952769019d8ca1 manager: 0x9d03c10
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd977d586952769019d8ca1'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8ca1'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 214560 splitThreshold: 943718
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 30.0 }
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 30.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 47.0 } ], shardId: "test.stuff_col_fam_upsert-_id_15.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2177
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|4||4fd977d586952769019d8ca1
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053445), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 15.0 }, max: { _id: 47.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') }, right: { min: { _id: 47.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|4||4fd977d586952769019d8ca1 and 3 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|6||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 12 version: 1|6||4fd977d586952769019d8ca1 based on: 1|4||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } on: { _id: 47.0 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:34:13 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:34:13 [conn] recently split chunk: { min: { _id: 47.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:13 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 11 current: 12 version: 1|6||4fd977d586952769019d8ca1 manager: 0x9d00990
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd977d586952769019d8ca1'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:13 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8ca1'), ok: 1.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: MaxKey } dataWritten: 194539 splitThreshold: 943718
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 47.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 47.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 47.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 62.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 47.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 62.0 }
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 47.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:13 [conn] chunk not full enough to trigger auto-split { _id: 62.0 }
m30001| Thu Jun 14 01:34:13 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_21/test.1, size: 32MB, took 0.65 secs
m30001| Thu Jun 14 01:34:13 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_21/test.2, filling with zeroes...
m30000| Thu Jun 14 01:34:13 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:34:13 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30999| Thu Jun 14 01:34:13 [conn] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|6||4fd977d586952769019d8ca1 and 4 chunks
m30999| Thu Jun 14 01:34:13 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|8||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 13 version: 1|8||4fd977d586952769019d8ca1 based on: 1|6||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:13 [conn] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: MaxKey } on: { _id: 82.0 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:34:13 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:34:13 [conn] moving chunk (auto): ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 82.0 } max: { _id: MaxKey } to: shard0000:localhost:30000
m30999| Thu Jun 14 01:34:13 [conn] moving chunk ns: test.stuff_col_fam_upsert moving ( ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 82.0 } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:13 [conn3] command test.$cmd command: { findandmodify: "stuff_col_fam_upsert", query: { _id: 80.0 }, update: { $set: { big: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } }, upsert: true } update: { $set: { big: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } } ntoreturn:1 nscanned:0 idhack:1 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) W:122 r:528 w:586298 reslen:44 270ms
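
The 270 ms findandmodify on conn3 above is the upsert form of the same pattern: upsert: true tells the shard to insert the document when nothing matches { _id: 80.0 } (hence fastmodinsert:1) before applying the $set. A shell sketch of the call that produces a command document like the one logged; the filler length is an assumption:

    // Sketch of the shell call behind the findandmodify command logged above.
    var big = new Array(32 * 1024).join("x");       // assumed ~32KB filler, stands in for the "xxx..." value
    db.stuff_col_fam_upsert.findAndModify({
        query:  { _id: 80 },
        update: { $set: { big: big } },
        upsert: true                                // insert-if-missing, as in the logged command
    });
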
m30001| Thu Jun 14 01:34:13 [conn4] request split points lookup for chunk test.stuff_col_fam_upsert { : 47.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : 47.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:13 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: 47.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 82.0 } ], shardId: "test.stuff_col_fam_upsert-_id_47.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2178
m30001| Thu Jun 14 01:34:13 [conn4] splitChunk accepted at version 1|6||4fd977d586952769019d8ca1
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053753), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: 47.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 47.0 }, max: { _id: 82.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') }, right: { min: { _id: 82.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd977d586952769019d8ca1') } } }
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30001| Thu Jun 14 01:34:13 [conn4] received moveChunk request: { moveChunk: "test.stuff_col_fam_upsert", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 82.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.stuff_col_fam_upsert-_id_82.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:13 [conn4] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:13 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d5c290e667b18d2179
m30001| Thu Jun 14 01:34:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:13-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652053757), what: "moveChunk.start", ns: "test.stuff_col_fam_upsert", details: { min: { _id: 82.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:13 [conn4] moveChunk request accepted at version 1|8||4fd977d586952769019d8ca1
m30001| Thu Jun 14 01:34:13 [conn4] moveChunk number of documents: 1
m30001| Thu Jun 14 01:34:13 [initandlisten] connection accepted from 127.0.0.1:42823 #5 (5 connections now open)
m30000| Thu Jun 14 01:34:14 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/test.ns, size: 16MB, took 0.952 secs
m30000| Thu Jun 14 01:34:14 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/test.0, filling with zeroes...
m30001| Thu Jun 14 01:34:14 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam_upsert", from: "localhost:30001", min: { _id: 82.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:15 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_21/test.2, size: 64MB, took 1.477 secs
m30000| Thu Jun 14 01:34:15 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/test.0, size: 16MB, took 0.815 secs
m30000| Thu Jun 14 01:34:15 [migrateThread] build index test.stuff_col_fam_upsert { _id: 1 }
m30000| Thu Jun 14 01:34:15 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:15 [migrateThread] info: creating collection test.stuff_col_fam_upsert on add index
m30000| Thu Jun 14 01:34:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff_col_fam_upsert' { _id: 82.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:34:15 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/test.1, filling with zeroes...
m30001| Thu Jun 14 01:34:15 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam_upsert", from: "localhost:30001", min: { _id: 82.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 32796, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:15 [conn4] moveChunk setting version to: 2|0||4fd977d586952769019d8ca1
m30000| Thu Jun 14 01:34:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff_col_fam_upsert' { _id: 82.0 } -> { _id: MaxKey }
m30000| Thu Jun 14 01:34:15 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652055780), what: "moveChunk.to", ns: "test.stuff_col_fam_upsert", details: { min: { _id: 82.0 }, max: { _id: MaxKey }, step1 of 5: 1778, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 242 } }
m30000| Thu Jun 14 01:34:15 [initandlisten] connection accepted from 127.0.0.1:51417 #10 (10 connections now open)
m30999| Thu Jun 14 01:34:15 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|8||4fd977d586952769019d8ca1 and 5 chunks
m30999| Thu Jun 14 01:34:15 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 2|1||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:15 [conn] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 14 version: 2|1||4fd977d586952769019d8ca1 based on: 1|8||4fd977d586952769019d8ca1
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff_col_fam_upsert my last seq: 5 current: 14 version: 2|0||4fd977d586952769019d8ca1 manager: 0x9d040e8
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0000 localhost:30000 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977d586952769019d8ca1'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0000", shardHost: "localhost:30000" } 0x9cfdcd0
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_fam_upsert", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_fam_upsert'", ok: 0.0 }
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30000 ns:test.stuff_col_fam_upsert my last seq: 5 current: 14 version: 2|0||4fd977d586952769019d8ca1 manager: 0x9d040e8
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0000 localhost:30000 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd977d586952769019d8ca1'), serverID: ObjectId('4fd977d386952769019d8c9c'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9cfdcd0
m30000| Thu Jun 14 01:34:15 [conn6] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
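
shard0000 had never owned a chunk of test.stuff_col_fam_upsert, so its first setShardVersion is refused with need_authoritative and mongos retries with authoritative: true, at which point the shard builds its chunk manager ("no current chunk manager found for this shard, will initialize") and the second attempt succeeds. The versions each side ends up holding can be inspected from a mongos shell; a sketch (output fields vary by server version):

    // Sketch: ask mongos / the shards what version they currently hold for the collection.
    db.adminCommand({ getShardVersion: "test.stuff_col_fam_upsert" });
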
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: 82.0 } max: { _id: MaxKey } dataWritten: 220871 splitThreshold: 943718
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { _id: 82.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:34:15 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff_col_fam_upsert", from: "localhost:30001", min: { _id: 82.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 32796, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:15 [conn4] moveChunk updating self version to: 2|1||4fd977d586952769019d8ca1 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.stuff_col_fam_upsert'
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055784), what: "moveChunk.commit", ns: "test.stuff_col_fam_upsert", details: { min: { _id: 82.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:15 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:15 [conn4] moveChunk deleted: 1
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_fam_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055785), what: "moveChunk.from", ns: "test.stuff_col_fam_upsert", details: { min: { _id: 82.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2010, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:15 [conn4] command admin.$cmd command: { moveChunk: "test.stuff_col_fam_upsert", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 82.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.stuff_col_fam_upsert-_id_82.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:4970 w:1110664 reslen:37 2029ms
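
That 2029 ms admin command closes the migration: the { _id: 82.0 } -> MaxKey chunk now lives on shard0000, shard0001 has deleted its copy inline, and the collection version has moved to 2|1. The same migration can be requested by hand through mongos; a sketch using the chunk bounds and target shard reported in the log:

    // Sketch of the manual equivalent of the automatic migration above (run via mongos).
    db.adminCommand({
        moveChunk: "test.stuff_col_fam_upsert",
        find: { _id: 82 },          // any value inside the chunk to be moved
        to:   "shard0000"           // target shard name as reported in the log
    });
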
---------- Done.
---------- Basic update...
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 58334 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_update-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:15 [conn4] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d7c290e667b18d217a
m30001| Thu Jun 14 01:34:15 [conn4] splitChunk accepted at version 1|0||4fd977d386952769019d8c9e
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055849), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') } } }
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:15 [conn] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|0||4fd977d386952769019d8c9e and 1 chunks
m30999| Thu Jun 14 01:34:15 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_update with version 1|2||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 15 version: 1|2||4fd977d386952769019d8c9e based on: 1|0||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 2 current: 15 version: 1|2||4fd977d386952769019d8c9e manager: 0x9d00990
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd977d386952769019d8c9e'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d386952769019d8c9e'), ok: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 207486 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 101628 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_update-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:15 [conn4] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d7c290e667b18d217b
m30001| Thu Jun 14 01:34:15 [conn4] splitChunk accepted at version 1|2||4fd977d386952769019d8c9e
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055869), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') }, right: { min: { _id: 99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') } } }
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:15 [conn] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|2||4fd977d386952769019d8c9e and 2 chunks
m30999| Thu Jun 14 01:34:15 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_update with version 1|4||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 16 version: 1|4||4fd977d386952769019d8c9e based on: 1|2||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 99.0 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:34:15 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:34:15 [conn] recently split chunk: { min: { _id: 99.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 15 current: 16 version: 1|4||4fd977d386952769019d8c9e manager: 0x9d03c10
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd977d386952769019d8c9e'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d386952769019d8c9e'), ok: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 220900 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 0.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:15 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : 0.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:15 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: 99.0 }, from: "shard0001", splitKeys: [ { _id: 43.0 } ], shardId: "test.stuff_col_update-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:15 [conn4] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d7c290e667b18d217c
m30001| Thu Jun 14 01:34:15 [conn4] splitChunk accepted at version 1|4||4fd977d386952769019d8c9e
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055913), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: 0.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 43.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') }, right: { min: { _id: 43.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') } } }
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:15 [conn] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|4||4fd977d386952769019d8c9e and 3 chunks
m30999| Thu Jun 14 01:34:15 [conn] loaded 3 chunks into new chunk manager for test.stuff_col_update with version 1|6||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 17 version: 1|6||4fd977d386952769019d8c9e based on: 1|4||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } on: { _id: 43.0 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 16 current: 17 version: 1|6||4fd977d386952769019d8c9e manager: 0x9d00990
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd977d386952769019d8c9e'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d386952769019d8c9e'), ok: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { _id: 0.0 } max: { _id: 43.0 } dataWritten: 235770 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 0.0 } -->> { : 43.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 39.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 43.0 } max: { _id: 99.0 } dataWritten: 235905 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 43.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 76.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 43.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 43.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 72.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 43.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 43.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:15 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : 43.0 } -->> { : 99.0 }
m30001| Thu Jun 14 01:34:15 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: 43.0 }, max: { _id: 99.0 }, from: "shard0001", splitKeys: [ { _id: 68.0 } ], shardId: "test.stuff_col_update-_id_43.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:15 [conn4] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d7c290e667b18d217d
m30001| Thu Jun 14 01:34:15 [conn4] splitChunk accepted at version 1|6||4fd977d386952769019d8c9e
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055926), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: 43.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 43.0 }, max: { _id: 68.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') }, right: { min: { _id: 68.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd977d386952769019d8c9e') } } }
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:15 [conn] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|6||4fd977d386952769019d8c9e and 4 chunks
m30999| Thu Jun 14 01:34:15 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_update with version 1|8||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 18 version: 1|8||4fd977d386952769019d8c9e based on: 1|6||4fd977d386952769019d8c9e
m30999| Thu Jun 14 01:34:15 [conn] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 43.0 } max: { _id: 99.0 } on: { _id: 68.0 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 17 current: 18 version: 1|8||4fd977d386952769019d8c9e manager: 0x9d03c10
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('4fd977d386952769019d8c9e'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d386952769019d8c9e'), ok: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { _id: 43.0 } max: { _id: 68.0 } dataWritten: 215763 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 43.0 } -->> { : 68.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 67.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 68.0 } max: { _id: 99.0 } dataWritten: 232121 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 68.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 88.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 68.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 68.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 86.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 68.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 68.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 85.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 68.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update { : 68.0 } -->> { : 99.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 84.0 }
---------- Done.
---------- Basic update with upsert...
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 175151 splitThreshold: 921
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32849 splitThreshold: 921
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split { _id: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32849 splitThreshold: 921
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30001| Thu Jun 14 01:34:15 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_update_upsert-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:15 [conn4] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d7c290e667b18d217e
m30001| Thu Jun 14 01:34:15 [conn4] splitChunk accepted at version 1|0||4fd977d586952769019d8c9f
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055981), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') } } }
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:15 [conn] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|0||4fd977d586952769019d8c9f and 1 chunks
m30999| Thu Jun 14 01:34:15 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|2||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:15 [conn] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 19 version: 1|2||4fd977d586952769019d8c9f based on: 1|0||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:15 [conn] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 3 current: 19 version: 1|2||4fd977d586952769019d8c9f manager: 0x9d00990
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd977d586952769019d8c9f'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8c9f'), ok: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 129724 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859
m30001| Thu Jun 14 01:34:15 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:15 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 15.0 } ], shardId: "test.stuff_col_update_upsert-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:15 [conn4] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d7c290e667b18d217f
m30001| Thu Jun 14 01:34:15 [conn4] splitChunk accepted at version 1|2||4fd977d586952769019d8c9f
m30001| Thu Jun 14 01:34:15 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:15-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652055994), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 15.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') }, right: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') } } }
m30001| Thu Jun 14 01:34:15 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:15 [conn] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|2||4fd977d586952769019d8c9f and 2 chunks
m30999| Thu Jun 14 01:34:15 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|4||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:15 [conn] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 20 version: 1|4||4fd977d586952769019d8c9f based on: 1|2||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:15 [conn] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 15.0 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:34:15 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:34:15 [conn] recently split chunk: { min: { _id: 15.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:15 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 19 current: 20 version: 1|4||4fd977d586952769019d8c9f manager: 0x9d02188
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd977d586952769019d8c9f'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:15 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8c9f'), ok: 1.0 }
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 238162 splitThreshold: 943718
m30999| Thu Jun 14 01:34:15 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:15 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split { _id: 30.0 }
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split { _id: 30.0 }
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:16 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:16 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 46.0 } ], shardId: "test.stuff_col_update_upsert-_id_15.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:16 [conn4] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:16 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d8c290e667b18d2180
m30001| Thu Jun 14 01:34:16 [conn4] splitChunk accepted at version 1|4||4fd977d586952769019d8c9f
m30001| Thu Jun 14 01:34:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:16-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652056009), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 15.0 }, max: { _id: 46.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') }, right: { min: { _id: 46.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') } } }
m30001| Thu Jun 14 01:34:16 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:16 [conn] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|4||4fd977d586952769019d8c9f and 3 chunks
m30999| Thu Jun 14 01:34:16 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|6||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:16 [conn] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 21 version: 1|6||4fd977d586952769019d8c9f based on: 1|4||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:16 [conn] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } on: { _id: 46.0 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:34:16 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:34:16 [conn] recently split chunk: { min: { _id: 46.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:16 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 20 current: 21 version: 1|6||4fd977d586952769019d8c9f manager: 0x9d00990
m30999| Thu Jun 14 01:34:16 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd977d586952769019d8c9f'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:16 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8c9f'), ok: 1.0 }
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 46.0 } max: { _id: MaxKey } dataWritten: 210747 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 46.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 46.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 46.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 46.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 46.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 46.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 46.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split { _id: 61.0 }
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 46.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split { _id: 61.0 }
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 46.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 46.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 46.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:16 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : 46.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:16 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: 46.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 78.0 } ], shardId: "test.stuff_col_update_upsert-_id_46.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:16 [conn4] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:16 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' acquired, ts : 4fd977d8c290e667b18d2181
m30001| Thu Jun 14 01:34:16 [conn4] splitChunk accepted at version 1|6||4fd977d586952769019d8c9f
m30001| Thu Jun 14 01:34:16 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:16-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42819", time: new Date(1339652056024), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: 46.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 46.0 }, max: { _id: 78.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') }, right: { min: { _id: 78.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd977d586952769019d8c9f') } } }
m30001| Thu Jun 14 01:34:16 [conn4] distributed lock 'test.stuff_col_update_upsert/domU-12-31-39-01-70-B4:30001:1339652053:262699440' unlocked.
m30999| Thu Jun 14 01:34:16 [conn] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|6||4fd977d586952769019d8c9f and 4 chunks
m30999| Thu Jun 14 01:34:16 [conn] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|8||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:16 [conn] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 22 version: 1|8||4fd977d586952769019d8c9f based on: 1|6||4fd977d586952769019d8c9f
m30999| Thu Jun 14 01:34:16 [conn] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 46.0 } max: { _id: MaxKey } on: { _id: 78.0 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:34:16 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:34:16 [conn] recently split chunk: { min: { _id: 78.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:16 [conn] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 21 current: 22 version: 1|8||4fd977d586952769019d8c9f manager: 0x9d0dce0
m30999| Thu Jun 14 01:34:16 [conn] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('4fd977d586952769019d8c9f'), serverID: ObjectId('4fd977d386952769019d8c9c'), shard: "shard0001", shardHost: "localhost:30001" } 0x9cfe230
m30999| Thu Jun 14 01:34:16 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd977d586952769019d8c9f'), ok: 1.0 }
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 78.0 } max: { _id: MaxKey } dataWritten: 217618 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 78.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 78.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 78.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:34:16 [conn] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 78.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718
m30001| Thu Jun 14 01:34:16 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 78.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:34:16 [conn] chunk not full enough to trigger auto-split { _id: 93.0 }
---------- Done.
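The upsert phase just finished is driven entirely from the shell: each update routed through mongos grows the chunk (avgObjSize is about 34 KB in the stats printed below), and once enough data has been written mongos asks the shard for split points. A minimal sketch of the kind of shell loop that would produce this pattern; the variable names, document size, and structure are illustrative only, not the actual contents of the test file:

    // Hypothetical workload sketch: 100 large upserts routed through the mongos
    // from this run (port 30999 taken from the log); names and sizes are illustrative.
    var coll = new Mongo("localhost:30999").getDB("test").stuff_col_update_upsert;
    var big = new Array(32 * 1024).join("x");   // ~32 KB filler, to force chunk growth
    for (var i = 0; i < 100; i++) {
        // query on the shard key (_id); upsert = true, multi = false
        coll.update({ _id: i }, { _id: i, big: big }, true, false);
    }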
---------- Printing chunks:
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { ns: 1.0, min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.stuff_col_fam-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977d586952769019d8ca0'), ns: "test.stuff_col_fam", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
ShardingTest test.stuff_col_fam-_id_MinKey 1000|1 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0001 test.stuff_col_fam
test.stuff_col_fam-_id_0.0 1000|5 { "_id" : 0 } -> { "_id" : 42 } shard0001 test.stuff_col_fam
test.stuff_col_fam-_id_42.0 1000|7 { "_id" : 42 } -> { "_id" : 69 } shard0001 test.stuff_col_fam
test.stuff_col_fam-_id_69.0 1000|8 { "_id" : 69 } -> { "_id" : 99 } shard0001 test.stuff_col_fam
test.stuff_col_fam-_id_99.0 1000|4 { "_id" : 99 } -> { "_id" : { $maxKey : 1 } } shard0001 test.stuff_col_fam
test.stuff_col_fam_upsert-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0001 test.stuff_col_fam_upsert
test.stuff_col_fam_upsert-_id_0.0 1000|3 { "_id" : 0 } -> { "_id" : 15 } shard0001 test.stuff_col_fam_upsert
test.stuff_col_fam_upsert-_id_15.0 1000|5 { "_id" : 15 } -> { "_id" : 47 } shard0001 test.stuff_col_fam_upsert
test.stuff_col_fam_upsert-_id_47.0 1000|7 { "_id" : 47 } -> { "_id" : 82 } shard0001 test.stuff_col_fam_upsert
test.stuff_col_fam_upsert-_id_82.0 2000|0 { "_id" : 82 } -> { "_id" : { $maxKey : 1 } } shard0000 test.stuff_col_fam_upsert
test.stuff_col_update-_id_MinKey 1000|1 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0001 test.stuff_col_update
test.stuff_col_update-_id_0.0 1000|5 { "_id" : 0 } -> { "_id" : 43 } shard0001 test.stuff_col_update
test.stuff_col_update-_id_43.0 1000|7 { "_id" : 43 } -> { "_id" : 68 } shard0001 test.stuff_col_update
test.stuff_col_update-_id_68.0 1000|8 { "_id" : 68 } -> { "_id" : 99 } shard0001 test.stuff_col_update
test.stuff_col_update-_id_99.0 1000|4 { "_id" : 99 } -> { "_id" : { $maxKey : 1 } } shard0001 test.stuff_col_update
test.stuff_col_update_upsert-_id_MinKey 1000|1 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0001 test.stuff_col_update_upsert
test.stuff_col_update_upsert-_id_0.0 1000|3 { "_id" : 0 } -> { "_id" : 15 } shard0001 test.stuff_col_update_upsert
test.stuff_col_update_upsert-_id_15.0 1000|5 { "_id" : 15 } -> { "_id" : 46 } shard0001 test.stuff_col_update_upsert
test.stuff_col_update_upsert-_id_46.0 1000|7 { "_id" : 46 } -> { "_id" : 78 } shard0001 test.stuff_col_update_upsert
test.stuff_col_update_upsert-_id_78.0 1000|8 { "_id" : 78 } -> { "_id" : { $maxKey : 1 } } shard0001 test.stuff_col_update_upsert
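The chunk map printed above is simply the contents of config.chunks as seen through mongos. A minimal sketch of reproducing one collection's listing by hand, assuming the same mongos on port 30999 as in this run:

    // Query the config database through mongos and print one line per chunk.
    var config = new Mongo("localhost:30999").getDB("config");
    config.chunks.find({ ns: "test.stuff_col_update" }).sort({ min: 1 }).forEach(function (c) {
        print(c._id + "  " + tojson(c.min) + " -->> " + tojson(c.max) + "  on " + c.shard);
    });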
---------- Verifying that both codepaths resulted in splits...
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.stuff_col_fam" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.stuff_col_fam" } }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.stuff_col_fam_upsert" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.stuff_col_fam_upsert" } }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.stuff_col_update" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.stuff_col_update" } }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.stuff_col_update_upsert" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.stuff_col_update_upsert" } }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 5.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:34:16 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:34:16 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.stuff_col_update",
    "count" : 100,
    "numExtents" : 4,
    "size" : 3443204,
    "storageSize" : 11104256,
    "totalIndexSize" : 8176,
    "indexSizes" : {
        "_id_" : 8176
    },
    "avgObjSize" : 34432.04,
    "nindexes" : 1,
    "nchunks" : 5,
    "shards" : {
        "shard0001" : {
            "ns" : "test.stuff_col_update",
            "count" : 100,
            "size" : 3443204,
            "avgObjSize" : 34432.04,
            "storageSize" : 11104256,
            "numExtents" : 4,
            "nindexes" : 1,
            "lastExtentSize" : 8454144,
            "paddingFactor" : 1.0990000000000038,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 8176,
            "indexSizes" : {
                "_id_" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
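The stats document above is the mongos view of a sharded collection: aggregate totals at the top level plus the raw collStats from each shard under "shards". A short sketch of requesting the same output from the shell, again assuming the mongos on port 30999 from this run:

    // collStats through mongos merges the per-shard results for a sharded namespace.
    var testDb = new Mongo("localhost:30999").getDB("test");
    var stats = testDb.stuff_col_update.stats();
    printjson(stats);
    assert(stats.sharded);          // routed, sharded collection
    assert.eq(5, stats.nchunks);    // matches the five chunks printed earlier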
m30999| Thu Jun 14 01:34:16 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:34:16 [conn5] end connection 127.0.0.1:51407 (9 connections now open)
m30000| Thu Jun 14 01:34:16 [conn3] end connection 127.0.0.1:51403 (9 connections now open)
m30000| Thu Jun 14 01:34:16 [conn6] end connection 127.0.0.1:51410 (7 connections now open)
m30001| Thu Jun 14 01:34:16 [conn3] end connection 127.0.0.1:42818 (4 connections now open)
m30001| Thu Jun 14 01:34:16 [conn4] end connection 127.0.0.1:42819 (3 connections now open)
m30000| Thu Jun 14 01:34:16 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/test.1, size: 32MB, took 0.842 secs
Thu Jun 14 01:34:17 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:34:17 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:34:17 [interruptThread] now exiting
m30000| Thu Jun 14 01:34:17 dbexit:
m30000| Thu Jun 14 01:34:17 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:34:17 [interruptThread] closing listening socket: 25
m30000| Thu Jun 14 01:34:17 [interruptThread] closing listening socket: 26
m30000| Thu Jun 14 01:34:17 [interruptThread] closing listening socket: 27
m30000| Thu Jun 14 01:34:17 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:34:17 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:34:17 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:34:17 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:34:17 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:34:17 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:34:17 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:34:17 [conn5] end connection 127.0.0.1:42823 (2 connections now open)
m30000| Thu Jun 14 01:34:17 [conn10] end connection 127.0.0.1:51417 (6 connections now open)
m30000| Thu Jun 14 01:34:17 dbexit: really exiting now
Thu Jun 14 01:34:18 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:34:18 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:34:18 [interruptThread] now exiting
m30001| Thu Jun 14 01:34:18 dbexit:
m30001| Thu Jun 14 01:34:18 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:34:18 [interruptThread] closing listening socket: 28
m30001| Thu Jun 14 01:34:18 [interruptThread] closing listening socket: 29
m30001| Thu Jun 14 01:34:18 [interruptThread] closing listening socket: 30
m30001| Thu Jun 14 01:34:18 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:34:18 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:34:18 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:34:18 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:34:18 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:34:18 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:34:18 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:34:18 dbexit: really exiting now
Thu Jun 14 01:34:19 shell: stopped mongo program on port 30001
*** ShardingTest find_and_modify_sharded_2 completed successfully in 8.226 seconds ***
8293.248892ms
Thu Jun 14 01:34:19 [initandlisten] connection accepted from 127.0.0.1:54921 #30 (17 connections now open)
*******************************************
Test : geo_near_random1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/geo_near_random1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/geo_near_random1.js";TestData.testFile = "geo_near_random1.js";TestData.testName = "geo_near_random1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:34:19 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/geo_near_random10'
Thu Jun 14 01:34:19 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/geo_near_random10
m30000| Thu Jun 14 01:34:19
m30000| Thu Jun 14 01:34:19 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:34:19
m30000| Thu Jun 14 01:34:19 [initandlisten] MongoDB starting : pid=24815 port=30000 dbpath=/data/db/geo_near_random10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:34:19 [initandlisten]
m30000| Thu Jun 14 01:34:19 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:34:19 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:34:19 [initandlisten]
m30000| Thu Jun 14 01:34:19 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:34:19 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:34:19 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:34:19 [initandlisten]
m30000| Thu Jun 14 01:34:19 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:34:19 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:34:19 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:34:19 [initandlisten] options: { dbpath: "/data/db/geo_near_random10", port: 30000 }
m30000| Thu Jun 14 01:34:19 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:34:19 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/geo_near_random11'
m30000| Thu Jun 14 01:34:19 [initandlisten] connection accepted from 127.0.0.1:51420 #1 (1 connection now open)
Thu Jun 14 01:34:19 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/geo_near_random11
m30001| Thu Jun 14 01:34:19
m30001| Thu Jun 14 01:34:19 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:34:19
m30001| Thu Jun 14 01:34:19 [initandlisten] MongoDB starting : pid=24828 port=30001 dbpath=/data/db/geo_near_random11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:34:19 [initandlisten]
m30001| Thu Jun 14 01:34:19 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:34:19 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:34:19 [initandlisten]
m30001| Thu Jun 14 01:34:19 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:34:19 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:34:19 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:34:19 [initandlisten]
m30001| Thu Jun 14 01:34:19 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:34:19 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:34:19 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:34:19 [initandlisten] options: { dbpath: "/data/db/geo_near_random11", port: 30001 }
m30001| Thu Jun 14 01:34:19 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:34:19 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/geo_near_random12'
m30001| Thu Jun 14 01:34:19 [initandlisten] connection accepted from 127.0.0.1:42829 #1 (1 connection now open)
Thu Jun 14 01:34:19 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/geo_near_random12
m30002| Thu Jun 14 01:34:19
m30002| Thu Jun 14 01:34:19 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:34:19
m30002| Thu Jun 14 01:34:19 [initandlisten] MongoDB starting : pid=24841 port=30002 dbpath=/data/db/geo_near_random12 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:34:19 [initandlisten]
m30002| Thu Jun 14 01:34:19 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:34:19 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:34:19 [initandlisten]
m30002| Thu Jun 14 01:34:19 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:34:19 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:34:19 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:34:19 [initandlisten]
m30002| Thu Jun 14 01:34:19 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:34:19 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:34:19 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:34:19 [initandlisten] options: { dbpath: "/data/db/geo_near_random12", port: 30002 }
m30002| Thu Jun 14 01:34:19 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:34:19 [websvr] admin web console waiting for connections on port 31002
"localhost:30000"
m30000| Thu Jun 14 01:34:19 [initandlisten] connection accepted from 127.0.0.1:51425 #2 (2 connections now open)
ShardingTest geo_near_random1 :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001,
        connection to localhost:30002
    ]
}
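The block above is the shell-side ShardingTest helper announcing the cluster it just built: one config server, three shards, and a mongos on port 30999. A minimal sketch of the setup a sharding jstest of this era performs; the positional ShardingTest constructor, ports, and collection name mirror the log, but the snippet is illustrative rather than the test file itself:

    // Bring up a three-shard cluster with one mongos, then shard a collection on _id.
    var st = new ShardingTest("geo_near_random1", 3);
    var testDb = st.s.getDB("test");
    st.adminCommand({ enablesharding: "test" });
    st.adminCommand({ shardcollection: "test.geo_near_random1", key: { _id: 1 } });
    // ... insert points, build the { loc: "2d" } index, run the geoNear checks ...
    st.stop();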
Thu Jun 14 01:34:19 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:34:19 [FileAllocator] allocating new datafile /data/db/geo_near_random10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:34:19 [FileAllocator] creating directory /data/db/geo_near_random10/_tmp
m30002| Thu Jun 14 01:34:19 [initandlisten] connection accepted from 127.0.0.1:45566 #1 (1 connection now open)
m30000| Thu Jun 14 01:34:19 [initandlisten] connection accepted from 127.0.0.1:51427 #3 (3 connections now open)
m30999| Thu Jun 14 01:34:19 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:34:19 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24855 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:34:19 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:34:19 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:34:19 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:34:20 [FileAllocator] done allocating datafile /data/db/geo_near_random10/config.ns, size: 16MB, took 0.247 secs
m30000| Thu Jun 14 01:34:20 [FileAllocator] allocating new datafile /data/db/geo_near_random10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:34:20 [FileAllocator] done allocating datafile /data/db/geo_near_random10/config.0, size: 16MB, took 0.253 secs
m30000| Thu Jun 14 01:34:20 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn2] insert config.settings keyUpdates:0 locks(micros) w:511801 511ms
m30000| Thu Jun 14 01:34:20 [FileAllocator] allocating new datafile /data/db/geo_near_random10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:51430 #4 (4 connections now open)
m30000| Thu Jun 14 01:34:20 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:34:20 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:34:20 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:20 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:34:20 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:34:20 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:34:20 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:34:20 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:34:20
m30999| Thu Jun 14 01:34:20 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:51431 #5 (5 connections now open)
m30000| Thu Jun 14 01:34:20 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652060:1804289383' acquired, ts : 4fd977dcbd8e983d99560b3b
m30999| Thu Jun 14 01:34:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652060:1804289383' unlocked.
m30999| Thu Jun 14 01:34:20 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652060:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:34:20 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:20 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:34:20 [mongosMain] connection accepted from 127.0.0.1:43411 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:34:20 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:34:20 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:34:20 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30000| Thu Jun 14 01:34:20 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:34:20 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30999| Thu Jun 14 01:34:20 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
m30000| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:51435 #6 (6 connections now open)
m30999| Thu Jun 14 01:34:20 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977dcbd8e983d99560b3a
m30999| Thu Jun 14 01:34:20 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977dcbd8e983d99560b3a
m30999| Thu Jun 14 01:34:20 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd977dcbd8e983d99560b3a
m30002| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:45576 #2 (2 connections now open)
m30002| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:45579 #3 (3 connections now open)
m30001| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:42840 #2 (2 connections now open)
m30001| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:42843 #3 (3 connections now open)
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Thu Jun 14 01:34:20 [conn] couldn't find database [test] in config db
m30001| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:42845 #4 (4 connections now open)
m30002| Thu Jun 14 01:34:20 [initandlisten] connection accepted from 127.0.0.1:45581 #4 (4 connections now open)
m30999| Thu Jun 14 01:34:20 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:20 [conn] DROP: test.geo_near_random1
m30001| Thu Jun 14 01:34:20 [conn3] CMD: drop test.geo_near_random1
starting test: geo_near_random1
m30999| Thu Jun 14 01:34:20 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:34:20 [conn] CMD: shardcollection: { shardcollection: "test.geo_near_random1", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:20 [conn] enable sharding on: test.geo_near_random1 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:34:20 [conn] going to create 1 chunk(s) for: test.geo_near_random1 using new epoch 4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:20 [FileAllocator] allocating new datafile /data/db/geo_near_random11/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:34:20 [FileAllocator] creating directory /data/db/geo_near_random11/_tmp
m30999| Thu Jun 14 01:34:20 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 2 version: 1|0||4fd977dcbd8e983d99560b3c based on: (empty)
m30000| Thu Jun 14 01:34:20 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:34:20 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:20 [conn] resetting shard version of test.geo_near_random1 on localhost:30000, version is zero
m30000| Thu Jun 14 01:34:20 [FileAllocator] done allocating datafile /data/db/geo_near_random10/config.1, size: 32MB, took 0.608 secs
m30001| Thu Jun 14 01:34:21 [FileAllocator] done allocating datafile /data/db/geo_near_random11/test.ns, size: 16MB, took 0.279 secs
m30001| Thu Jun 14 01:34:21 [FileAllocator] allocating new datafile /data/db/geo_near_random11/test.0, filling with zeroes...
m30001| Thu Jun 14 01:34:21 [FileAllocator] done allocating datafile /data/db/geo_near_random11/test.0, size: 16MB, took 0.392 secs
m30001| Thu Jun 14 01:34:21 [conn4] build index test.geo_near_random1 { _id: 1 }
m30001| Thu Jun 14 01:34:21 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:34:21 [conn4] info: creating collection test.geo_near_random1 on add index
m30001| Thu Jun 14 01:34:21 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) r:254 w:1150897 1150ms
m30001| Thu Jun 14 01:34:21 [conn3] command admin.$cmd command: { setShardVersion: "test.geo_near_random1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977dcbd8e983d99560b3c'), serverID: ObjectId('4fd977dcbd8e983d99560b3a'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:69 w:104 reslen:199 1148ms
m30001| Thu Jun 14 01:34:21 [FileAllocator] allocating new datafile /data/db/geo_near_random11/test.1, filling with zeroes...
m30001| Thu Jun 14 01:34:21 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:34:21 [initandlisten] connection accepted from 127.0.0.1:51440 #7 (7 connections now open)
m30001| Thu Jun 14 01:34:21 [conn4] request split points lookup for chunk test.geo_near_random1 { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:21 [conn4] max number of requested split points reached (2) before the end of chunk test.geo_near_random1 { : MinKey } -->> { : MaxKey }
m30000| Thu Jun 14 01:34:21 [initandlisten] connection accepted from 127.0.0.1:51441 #8 (8 connections now open)
m30001| Thu Jun 14 01:34:21 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.geo_near_random1-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:21 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:21 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652061:1358337428 (sleeping for 30000ms)
m30001| Thu Jun 14 01:34:21 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977dd0a38b55e5460ac18
m30999| Thu Jun 14 01:34:21 [conn] resetting shard version of test.geo_near_random1 on localhost:30002, version is zero
m30001| Thu Jun 14 01:34:21 [conn4] splitChunk accepted at version 1|0||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:21-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652061637), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:21 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:21 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 2ms sequenceNumber: 3 version: 1|2||4fd977dcbd8e983d99560b3c based on: 1|0||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:21 [conn] autosplitted test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
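The split above was not requested by the test: after the first inserts mongos autosplits the single MinKey -->> MaxKey chunk at { _id: 0 }, and the shard holds a per-collection distributed lock on the config server while it does so (the 'distributed lock ... acquired/unlocked' lines). While such an operation runs, the lock documents can be inspected directly on the config server; a sketch, assuming the config.locks and config.lockpings collections used by this branch:

    // sketch: look at the distributed locks behind the splitChunk/moveChunk log lines
    var config = new Mongo("localhost:30000").getDB("config");   // the config server of this test
    // one document per named lock; state > 0 means currently held
    printjson(config.locks.find({ _id: "test.geo_near_random1" }).toArray());
    // heartbeat documents written by the [LockPinger] thread seen in the log
    printjson(config.lockpings.find().toArray());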
m30001| Thu Jun 14 01:34:21 [conn3] build index test.geo_near_random1 { loc: "2d" }
m30001| Thu Jun 14 01:34:21 [conn3] build index done. scanned 50 total records. 0.013 secs
m30999| Thu Jun 14 01:34:21 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:21 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 5.0 } ], shardId: "test.geo_near_random1-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:21 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:21 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977dd0a38b55e5460ac19
m30001| Thu Jun 14 01:34:21 [conn4] splitChunk accepted at version 1|2||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:21-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652061661), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:21 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:21 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 4 version: 1|4||4fd977dcbd8e983d99560b3c based on: 1|2||4fd977dcbd8e983d99560b3c
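From here on the splits are explicit: the harness asks mongos to split the [0, MaxKey) chunk exactly at { _id: 5 } (the 'splitting:' and 'splitKeys: [ { _id: 5.0 } ]' lines). From the shell this corresponds to the split command with a fixed middle key, roughly:

    // sketch: manual split at { _id: 5 }, as requested through mongos
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({ split: "test.geo_near_random1", middle: { _id: 5 } }));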
m30999| Thu Jun 14 01:34:21 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 4.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:34:21 [conn] moving chunk ns: test.geo_near_random1 moving ( ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 5.0 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:34:21 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:21 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:21 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977dd0a38b55e5460ac1a
m30001| Thu Jun 14 01:34:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:21-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652061664), what: "moveChunk.start", ns: "test.geo_near_random1", details: { min: { _id: 0.0 }, max: { _id: 5.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:21 [conn4] moveChunk request accepted at version 1|4||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:21 [conn4] moveChunk number of documents: 5
m30002| Thu Jun 14 01:34:21 [initandlisten] connection accepted from 127.0.0.1:45584 #5 (5 connections now open)
m30001| Thu Jun 14 01:34:21 [initandlisten] connection accepted from 127.0.0.1:42850 #5 (5 connections now open)
m30002| Thu Jun 14 01:34:21 [FileAllocator] allocating new datafile /data/db/geo_near_random12/test.ns, filling with zeroes...
m30002| Thu Jun 14 01:34:21 [FileAllocator] creating directory /data/db/geo_near_random12/_tmp
m30001| Thu Jun 14 01:34:22 [FileAllocator] done allocating datafile /data/db/geo_near_random11/test.1, size: 32MB, took 0.652 secs
m30002| Thu Jun 14 01:34:22 [FileAllocator] done allocating datafile /data/db/geo_near_random12/test.ns, size: 16MB, took 0.28 secs
m30002| Thu Jun 14 01:34:22 [FileAllocator] allocating new datafile /data/db/geo_near_random12/test.0, filling with zeroes...
m30001| Thu Jun 14 01:34:22 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Thu Jun 14 01:34:22 [FileAllocator] done allocating datafile /data/db/geo_near_random12/test.0, size: 16MB, took 0.307 secs
m30002| Thu Jun 14 01:34:22 [migrateThread] build index test.geo_near_random1 { _id: 1 }
m30002| Thu Jun 14 01:34:22 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:34:22 [migrateThread] info: creating collection test.geo_near_random1 on add index
m30002| Thu Jun 14 01:34:22 [migrateThread] build index test.geo_near_random1 { loc: "2d" }
m30002| Thu Jun 14 01:34:22 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:34:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 0.0 } -> { _id: 5.0 }
m30002| Thu Jun 14 01:34:22 [FileAllocator] allocating new datafile /data/db/geo_near_random12/test.1, filling with zeroes...
m30002| Thu Jun 14 01:34:23 [FileAllocator] done allocating datafile /data/db/geo_near_random12/test.1, size: 32MB, took 0.587 secs
m30001| Thu Jun 14 01:34:23 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:23 [conn4] moveChunk setting version to: 2|0||4fd977dcbd8e983d99560b3c
m30002| Thu Jun 14 01:34:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 0.0 } -> { _id: 5.0 }
m30002| Thu Jun 14 01:34:23 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:23-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652063684), what: "moveChunk.to", ns: "test.geo_near_random1", details: { min: { _id: 0.0 }, max: { _id: 5.0 }, step1 of 5: 1205, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 812 } }
m30000| Thu Jun 14 01:34:23 [initandlisten] connection accepted from 127.0.0.1:51444 #9 (9 connections now open)
m30001| Thu Jun 14 01:34:23 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:23 [conn4] moveChunk updating self version to: 2|1||4fd977dcbd8e983d99560b3c through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random1'
m30001| Thu Jun 14 01:34:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:23-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652063689), what: "moveChunk.commit", ns: "test.geo_near_random1", details: { min: { _id: 0.0 }, max: { _id: 5.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:23 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:23 [conn4] moveChunk deleted: 5
m30001| Thu Jun 14 01:34:23 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30001| Thu Jun 14 01:34:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:23-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652063690), what: "moveChunk.from", ns: "test.geo_near_random1", details: { min: { _id: 0.0 }, max: { _id: 5.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2007, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:23 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:632 w:1151311 reslen:37 2027ms
m30999| Thu Jun 14 01:34:23 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 5 version: 2|1||4fd977dcbd8e983d99560b3c based on: 1|4||4fd977dcbd8e983d99560b3c
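The block above is one complete chunk migration: mongos sends moveChunk to the donor shard0001, the donor clones the 5 documents in [0, 5) to shard0002, the TO-shard commits ('moveChunk migrate commit accepted'), the donor deletes its copy and the collection version is bumped to 2|1. The request that started it (the 'CMD: movechunk' line at 01:34:21) can be issued from the shell roughly as:

    // sketch: the migration request behind the "CMD: movechunk" line
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({
        moveChunk: "test.geo_near_random1",
        find: { _id: 4 },      // any shard-key value inside the [0, 5) chunk
        to: "shard0002"
    }));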
m30001| Thu Jun 14 01:34:23 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 10.0 } ], shardId: "test.geo_near_random1-_id_5.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:23 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:23 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 5.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:23 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977df0a38b55e5460ac1b
m30001| Thu Jun 14 01:34:23 [conn4] splitChunk accepted at version 2|1||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:23-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652063694), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 10.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:23 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:23 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 6 version: 2|3||4fd977dcbd8e983d99560b3c based on: 2|1||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:23 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 9.0 }, to: "shard0001" }
m30001| Thu Jun 14 01:34:23 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 15.0 } ], shardId: "test.geo_near_random1-_id_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:23 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:23 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { _id: 10.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:23 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977df0a38b55e5460ac1c
m30001| Thu Jun 14 01:34:23 [conn4] splitChunk accepted at version 2|3||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:23-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652063700), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 10.0 }, max: { _id: 15.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:23 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:23 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 7 version: 2|5||4fd977dcbd8e983d99560b3c based on: 2|3||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:23 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 14.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:34:23 [conn] moving chunk ns: test.geo_near_random1 moving ( ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { _id: 10.0 } max: { _id: 15.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:23 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: 15.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:23 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:23 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977df0a38b55e5460ac1d
m30001| Thu Jun 14 01:34:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:23-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652063703), what: "moveChunk.start", ns: "test.geo_near_random1", details: { min: { _id: 10.0 }, max: { _id: 15.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:23 [conn4] moveChunk request accepted at version 2|5||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:23 [conn4] moveChunk number of documents: 5
m30001| Thu Jun 14 01:34:23 [initandlisten] connection accepted from 127.0.0.1:42852 #6 (6 connections now open)
m30000| Thu Jun 14 01:34:23 [FileAllocator] allocating new datafile /data/db/geo_near_random10/test.ns, filling with zeroes...
m30000| Thu Jun 14 01:34:23 [FileAllocator] done allocating datafile /data/db/geo_near_random10/test.ns, size: 16MB, took 0.283 secs
m30000| Thu Jun 14 01:34:23 [FileAllocator] allocating new datafile /data/db/geo_near_random10/test.0, filling with zeroes...
m30000| Thu Jun 14 01:34:24 [FileAllocator] done allocating datafile /data/db/geo_near_random10/test.0, size: 16MB, took 0.325 secs
m30000| Thu Jun 14 01:34:24 [FileAllocator] allocating new datafile /data/db/geo_near_random10/test.1, filling with zeroes...
m30000| Thu Jun 14 01:34:24 [migrateThread] build index test.geo_near_random1 { _id: 1 }
m30000| Thu Jun 14 01:34:24 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:24 [migrateThread] info: creating collection test.geo_near_random1 on add index
m30000| Thu Jun 14 01:34:24 [migrateThread] build index test.geo_near_random1 { loc: "2d" }
m30000| Thu Jun 14 01:34:24 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 10.0 } -> { _id: 15.0 }
m30001| Thu Jun 14 01:34:24 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 10.0 }, max: { _id: 15.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:24 [conn4] moveChunk setting version to: 3|0||4fd977dcbd8e983d99560b3c
m30000| Thu Jun 14 01:34:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 10.0 } -> { _id: 15.0 }
m30000| Thu Jun 14 01:34:24 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:24-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652064720), what: "moveChunk.to", ns: "test.geo_near_random1", details: { min: { _id: 10.0 }, max: { _id: 15.0 }, step1 of 5: 619, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 395 } }
m30000| Thu Jun 14 01:34:24 [initandlisten] connection accepted from 127.0.0.1:51446 #10 (10 connections now open)
m30001| Thu Jun 14 01:34:24 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 10.0 }, max: { _id: 15.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:24 [conn4] moveChunk updating self version to: 3|1||4fd977dcbd8e983d99560b3c through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random1'
m30001| Thu Jun 14 01:34:24 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:24-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652064727), what: "moveChunk.commit", ns: "test.geo_near_random1", details: { min: { _id: 10.0 }, max: { _id: 15.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:24 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:24 [conn4] moveChunk deleted: 5
m30001| Thu Jun 14 01:34:24 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30001| Thu Jun 14 01:34:24 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:24-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652064728), what: "moveChunk.from", ns: "test.geo_near_random1", details: { min: { _id: 10.0 }, max: { _id: 15.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1004, step5 of 6: 18, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:24 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: 15.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_10.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:732 w:1151660 reslen:37 1025ms
m30999| Thu Jun 14 01:34:24 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 8 version: 3|1||4fd977dcbd8e983d99560b3c based on: 2|5||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:24 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:24 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 20.0 } ], shardId: "test.geo_near_random1-_id_15.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:24 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:24 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e00a38b55e5460ac1e
m30001| Thu Jun 14 01:34:24 [conn4] splitChunk accepted at version 3|1||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:24 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:24-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652064732), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 15.0 }, max: { _id: 20.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 20.0 }, max: { _id: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:24 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:24 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 1ms sequenceNumber: 9 version: 3|3||4fd977dcbd8e983d99560b3c based on: 3|1||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:24 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 19.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:34:24 [conn] moving chunk ns: test.geo_near_random1 moving ( ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 3|2||000000000000000000000000 min: { _id: 15.0 } max: { _id: 20.0 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:34:24 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 15.0 }, max: { _id: 20.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_15.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:24 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:24 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e00a38b55e5460ac1f
m30001| Thu Jun 14 01:34:24 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:24-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652064736), what: "moveChunk.start", ns: "test.geo_near_random1", details: { min: { _id: 15.0 }, max: { _id: 20.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:24 [conn4] moveChunk request accepted at version 3|3||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:24 [conn4] moveChunk number of documents: 5
m30002| Thu Jun 14 01:34:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 15.0 } -> { _id: 20.0 }
m30000| Thu Jun 14 01:34:24 [FileAllocator] done allocating datafile /data/db/geo_near_random10/test.1, size: 32MB, took 0.605 secs
m30001| Thu Jun 14 01:34:25 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 15.0 }, max: { _id: 20.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:25 [conn4] moveChunk setting version to: 4|0||4fd977dcbd8e983d99560b3c
m30002| Thu Jun 14 01:34:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 15.0 } -> { _id: 20.0 }
m30002| Thu Jun 14 01:34:25 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:25-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652065744), what: "moveChunk.to", ns: "test.geo_near_random1", details: { min: { _id: 15.0 }, max: { _id: 20.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1006 } }
m30001| Thu Jun 14 01:34:25 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 15.0 }, max: { _id: 20.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:25 [conn4] moveChunk updating self version to: 4|1||4fd977dcbd8e983d99560b3c through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random1'
m30001| Thu Jun 14 01:34:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:25-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652065749), what: "moveChunk.commit", ns: "test.geo_near_random1", details: { min: { _id: 15.0 }, max: { _id: 20.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:25 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:25 [conn4] moveChunk deleted: 5
m30001| Thu Jun 14 01:34:25 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30001| Thu Jun 14 01:34:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:25-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652065750), what: "moveChunk.from", ns: "test.geo_near_random1", details: { min: { _id: 15.0 }, max: { _id: 20.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:25 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 15.0 }, max: { _id: 20.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_15.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:804 w:1151956 reslen:37 1014ms
m30999| Thu Jun 14 01:34:25 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 10 version: 4|1||4fd977dcbd8e983d99560b3c based on: 3|3||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:25 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { _id: 20.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:25 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 20.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 25.0 } ], shardId: "test.geo_near_random1-_id_20.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:25 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:25 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e10a38b55e5460ac20
m30001| Thu Jun 14 01:34:25 [conn4] splitChunk accepted at version 4|1||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:25-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652065754), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 20.0 }, max: { _id: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 20.0 }, max: { _id: 25.0 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 25.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:25 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:25 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 11 version: 4|3||4fd977dcbd8e983d99560b3c based on: 4|1||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:25 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 24.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:34:25 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 4|3||000000000000000000000000 min: { _id: 25.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:25 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 25.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 30.0 } ], shardId: "test.geo_near_random1-_id_25.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:25 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:25 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e10a38b55e5460ac21
m30001| Thu Jun 14 01:34:25 [conn4] splitChunk accepted at version 4|3||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:25-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652065759), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 25.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 25.0 }, max: { _id: 30.0 }, lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 30.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:25 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:25 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 12 version: 4|5||4fd977dcbd8e983d99560b3c based on: 4|3||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:25 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 29.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:34:25 [conn] moving chunk ns: test.geo_near_random1 moving ( ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 4|4||000000000000000000000000 min: { _id: 25.0 } max: { _id: 30.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:25 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 25.0 }, max: { _id: 30.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_25.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:25 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:25 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e10a38b55e5460ac22
m30001| Thu Jun 14 01:34:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:25-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652065762), what: "moveChunk.start", ns: "test.geo_near_random1", details: { min: { _id: 25.0 }, max: { _id: 30.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:25 [conn4] moveChunk request accepted at version 4|5||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:25 [conn4] moveChunk number of documents: 5
m30000| Thu Jun 14 01:34:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 25.0 } -> { _id: 30.0 }
m30001| Thu Jun 14 01:34:26 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 25.0 }, max: { _id: 30.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:26 [conn4] moveChunk setting version to: 5|0||4fd977dcbd8e983d99560b3c
m30000| Thu Jun 14 01:34:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 25.0 } -> { _id: 30.0 }
m30000| Thu Jun 14 01:34:26 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:26-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652066772), what: "moveChunk.to", ns: "test.geo_near_random1", details: { min: { _id: 25.0 }, max: { _id: 30.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30001| Thu Jun 14 01:34:26 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 25.0 }, max: { _id: 30.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:26 [conn4] moveChunk updating self version to: 5|1||4fd977dcbd8e983d99560b3c through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random1'
m30001| Thu Jun 14 01:34:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:26-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652066777), what: "moveChunk.commit", ns: "test.geo_near_random1", details: { min: { _id: 25.0 }, max: { _id: 30.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:26 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:26 [conn4] moveChunk deleted: 5
m30001| Thu Jun 14 01:34:26 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30001| Thu Jun 14 01:34:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:26-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652066778), what: "moveChunk.from", ns: "test.geo_near_random1", details: { min: { _id: 25.0 }, max: { _id: 30.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:26 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 25.0 }, max: { _id: 30.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_25.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:880 w:1152224 reslen:37 1016ms
m30999| Thu Jun 14 01:34:26 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 13 version: 5|1||4fd977dcbd8e983d99560b3c based on: 4|5||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:26 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 4|5||000000000000000000000000 min: { _id: 30.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:26 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 30.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 35.0 } ], shardId: "test.geo_near_random1-_id_30.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:26 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:26 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e20a38b55e5460ac23
m30001| Thu Jun 14 01:34:26 [conn4] splitChunk accepted at version 5|1||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:26-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652066782), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 30.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 30.0 }, max: { _id: 35.0 }, lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 35.0 }, max: { _id: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:26 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:26 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 14 version: 5|3||4fd977dcbd8e983d99560b3c based on: 5|1||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:26 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 34.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:34:26 [conn] moving chunk ns: test.geo_near_random1 moving ( ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 5|2||000000000000000000000000 min: { _id: 30.0 } max: { _id: 35.0 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:34:26 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 30.0 }, max: { _id: 35.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_30.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:26 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:26 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e20a38b55e5460ac24
m30001| Thu Jun 14 01:34:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:26-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652066785), what: "moveChunk.start", ns: "test.geo_near_random1", details: { min: { _id: 30.0 }, max: { _id: 35.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:26 [conn4] moveChunk request accepted at version 5|3||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:26 [conn4] moveChunk number of documents: 5
m30002| Thu Jun 14 01:34:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 30.0 } -> { _id: 35.0 }
m30001| Thu Jun 14 01:34:27 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 30.0 }, max: { _id: 35.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:27 [conn4] moveChunk setting version to: 6|0||4fd977dcbd8e983d99560b3c
m30002| Thu Jun 14 01:34:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 30.0 } -> { _id: 35.0 }
m30002| Thu Jun 14 01:34:27 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:27-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652067796), what: "moveChunk.to", ns: "test.geo_near_random1", details: { min: { _id: 30.0 }, max: { _id: 35.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30001| Thu Jun 14 01:34:27 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 30.0 }, max: { _id: 35.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:27 [conn4] moveChunk updating self version to: 6|1||4fd977dcbd8e983d99560b3c through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random1'
m30001| Thu Jun 14 01:34:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:27-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652067801), what: "moveChunk.commit", ns: "test.geo_near_random1", details: { min: { _id: 30.0 }, max: { _id: 35.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:27 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:27 [conn4] moveChunk deleted: 5
m30001| Thu Jun 14 01:34:27 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30001| Thu Jun 14 01:34:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:27-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652067802), what: "moveChunk.from", ns: "test.geo_near_random1", details: { min: { _id: 30.0 }, max: { _id: 35.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:27 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 30.0 }, max: { _id: 35.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_30.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:962 w:1152604 reslen:37 1017ms
m30999| Thu Jun 14 01:34:27 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 15 version: 6|1||4fd977dcbd8e983d99560b3c based on: 5|3||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:27 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 35.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 40.0 } ], shardId: "test.geo_near_random1-_id_35.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:27 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:27 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e30a38b55e5460ac25
m30999| Thu Jun 14 01:34:27 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { _id: 35.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:27 [conn4] splitChunk accepted at version 6|1||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:27-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652067806), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 35.0 }, max: { _id: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 35.0 }, max: { _id: 40.0 }, lastmod: Timestamp 6000|2, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 40.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:27 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:27 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 16 version: 6|3||4fd977dcbd8e983d99560b3c based on: 6|1||4fd977dcbd8e983d99560b3c
m30999| Thu Jun 14 01:34:27 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 39.0 }, to: "shard0001" }
m30001| Thu Jun 14 01:34:27 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random1", keyPattern: { _id: 1.0 }, min: { _id: 40.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 45.0 } ], shardId: "test.geo_near_random1-_id_40.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:27 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:27 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e30a38b55e5460ac26
m30999| Thu Jun 14 01:34:27 [conn] splitting: test.geo_near_random1 shard: ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 6|3||000000000000000000000000 min: { _id: 40.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:27 [conn4] splitChunk accepted at version 6|3||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:27-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652067811), what: "split", ns: "test.geo_near_random1", details: { before: { min: { _id: 40.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 40.0 }, max: { _id: 45.0 }, lastmod: Timestamp 6000|4, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') }, right: { min: { _id: 45.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|5, lastmodEpoch: ObjectId('4fd977dcbd8e983d99560b3c') } } }
m30001| Thu Jun 14 01:34:27 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30999| Thu Jun 14 01:34:27 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 17 version: 6|5||4fd977dcbd8e983d99560b3c based on: 6|3||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:27 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 40.0 }, max: { _id: 45.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_40.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:27 [conn4] created new distributed lock for test.geo_near_random1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:27 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' acquired, ts : 4fd977e30a38b55e5460ac27
m30001| Thu Jun 14 01:34:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:27-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652067815), what: "moveChunk.start", ns: "test.geo_near_random1", details: { min: { _id: 40.0 }, max: { _id: 45.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:27 [conn4] moveChunk request accepted at version 6|5||4fd977dcbd8e983d99560b3c
m30001| Thu Jun 14 01:34:27 [conn4] moveChunk number of documents: 5
m30000| Thu Jun 14 01:34:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 40.0 } -> { _id: 45.0 }
m30999| Thu Jun 14 01:34:27 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random1", find: { _id: 44.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:34:27 [conn] moving chunk ns: test.geo_near_random1 moving ( ns:test.geo_near_random1 at: shard0001:localhost:30001 lastmod: 6|4||000000000000000000000000 min: { _id: 40.0 } max: { _id: 45.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:28 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 40.0 }, max: { _id: 45.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:28 [conn4] moveChunk setting version to: 7|0||4fd977dcbd8e983d99560b3c
m30000| Thu Jun 14 01:34:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random1' { _id: 40.0 } -> { _id: 45.0 }
m30000| Thu Jun 14 01:34:28 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:28-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652068828), what: "moveChunk.to", ns: "test.geo_near_random1", details: { min: { _id: 40.0 }, max: { _id: 45.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1012 } }
m30001| Thu Jun 14 01:34:28 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random1", from: "localhost:30001", min: { _id: 40.0 }, max: { _id: 45.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 250, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:28 [conn4] moveChunk updating self version to: 7|1||4fd977dcbd8e983d99560b3c through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random1'
m30001| Thu Jun 14 01:34:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:28-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652068833), what: "moveChunk.commit", ns: "test.geo_near_random1", details: { min: { _id: 40.0 }, max: { _id: 45.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:28 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:28 [conn4] moveChunk deleted: 5
m30001| Thu Jun 14 01:34:28 [conn4] distributed lock 'test.geo_near_random1/domU-12-31-39-01-70-B4:30001:1339652061:1358337428' unlocked.
m30001| Thu Jun 14 01:34:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:28-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42845", time: new Date(1339652068834), what: "moveChunk.from", ns: "test.geo_near_random1", details: { min: { _id: 40.0 }, max: { _id: 45.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:28 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 40.0 }, max: { _id: 45.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random1-_id_40.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:1033 w:1152854 reslen:37 1019ms
m30999| Thu Jun 14 01:34:28 [conn] ChunkManager: time to load chunks for test.geo_near_random1: 0ms sequenceNumber: 18 version: 7|1||4fd977dcbd8e983d99560b3c based on: 6|5||4fd977dcbd8e983d99560b3c
m30000| Thu Jun 14 01:34:28 [initandlisten] connection accepted from 127.0.0.1:51447 #11 (11 connections now open)
m30001| Thu Jun 14 01:34:28 [initandlisten] connection accepted from 127.0.0.1:42855 #7 (7 connections now open)
m30002| Thu Jun 14 01:34:28 [initandlisten] connection accepted from 127.0.0.1:45591 #6 (6 connections now open)
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.geo_near_random1 chunks:
              { "_id" : { $minKey : 1 } } -->> { "_id" : 0 } on : shard0001 { "estimate" : false, "size" : 0, "numObjects" : 0 }
              { "_id" : 0 } -->> { "_id" : 5 } on : shard0002 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 5 } -->> { "_id" : 10 } on : shard0001 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 10 } -->> { "_id" : 15 } on : shard0000 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 15 } -->> { "_id" : 20 } on : shard0002 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 20 } -->> { "_id" : 25 } on : shard0001 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 25 } -->> { "_id" : 30 } on : shard0000 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 30 } -->> { "_id" : 35 } on : shard0002 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 35 } -->> { "_id" : 40 } on : shard0001 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 40 } -->> { "_id" : 45 } on : shard0000 { "estimate" : false, "size" : 260, "numObjects" : 5 }
              { "_id" : 45 } -->> { "_id" : { $maxKey : 1 } } on : shard0001 { "estimate" : false, "size" : 260, "numObjects" : 5 }
testing point: [ 0, 0 ] opts: { "sharded" : true, "sphere" : 0, "nToTest" : 50 }
testing point: [ -76.107116267737, -52.88817035034299 ] opts: { "sharded" : true, "sphere" : 0, "nToTest" : 50 }
testing point: [ 84.24053569333628, 19.137459313496947 ] opts: { "sharded" : true, "sphere" : 0, "nToTest" : 50 }
testing point: [ -5.0725878230296075, 71.2684281449765 ] opts: { "sharded" : true, "sphere" : 0, "nToTest" : 50 }
testing point: [ -113.95575480377302, 8.089775992557406 ] opts: { "sharded" : true, "sphere" : 0, "nToTest" : 50 }
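Each 'testing point' line is one round of the actual geo test: the harness runs a 2d proximity query near the printed point against the sharded collection through mongos (sphere: 0, so flat geometry; nToTest: 50 results) and compares the answers with an unsharded reference. One such round looks roughly like this (a sketch, not the test's literal code):

    // sketch: one "testing point" round against the sharded collection via mongos
    var testdb = new Mongo("localhost:30999").getDB("test");
    var res = testdb.runCommand({
        geoNear: "geo_near_random1",
        near: [0, 0],     // the point printed in the log line
        num: 50           // nToTest from the opts
    });
    print("got " + res.results.length + " results, ok: " + res.ok);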
m30999| Thu Jun 14 01:34:29 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:34:29 [conn5] end connection 127.0.0.1:51431 (10 connections now open)
m30000| Thu Jun 14 01:34:29 [conn3] end connection 127.0.0.1:51427 (10 connections now open)
m30000| Thu Jun 14 01:34:29 [conn6] end connection 127.0.0.1:51435 (8 connections now open)
m30001| Thu Jun 14 01:34:29 [conn3] end connection 127.0.0.1:42843 (6 connections now open)
m30001| Thu Jun 14 01:34:29 [conn4] end connection 127.0.0.1:42845 (6 connections now open)
m30002| Thu Jun 14 01:34:29 [conn3] end connection 127.0.0.1:45579 (5 connections now open)
m30002| Thu Jun 14 01:34:29 [conn4] end connection 127.0.0.1:45581 (4 connections now open)
Thu Jun 14 01:34:30 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:34:30 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:34:30 [interruptThread] now exiting
m30000| Thu Jun 14 01:34:30 dbexit:
m30000| Thu Jun 14 01:34:30 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:34:30 [interruptThread] closing listening socket: 26
m30000| Thu Jun 14 01:34:30 [interruptThread] closing listening socket: 27
m30000| Thu Jun 14 01:34:30 [interruptThread] closing listening socket: 28
m30000| Thu Jun 14 01:34:30 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:34:30 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:34:30 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:34:30 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:34:30 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:34:30 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:34:30 [conn6] end connection 127.0.0.1:42852 (4 connections now open)
m30000| Thu Jun 14 01:34:30 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:34:30 [conn10] end connection 127.0.0.1:51446 (7 connections now open)
m30000| Thu Jun 14 01:34:30 dbexit: really exiting now
Thu Jun 14 01:34:31 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:34:31 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:34:31 [interruptThread] now exiting
m30001| Thu Jun 14 01:34:31 dbexit:
m30001| Thu Jun 14 01:34:31 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:34:31 [interruptThread] closing listening socket: 29
m30001| Thu Jun 14 01:34:31 [interruptThread] closing listening socket: 30
m30001| Thu Jun 14 01:34:31 [interruptThread] closing listening socket: 31
m30001| Thu Jun 14 01:34:31 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:34:31 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:34:31 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:34:31 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:34:31 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:34:31 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:34:31 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:34:31 dbexit: really exiting now
m30002| Thu Jun 14 01:34:31 [conn5] end connection 127.0.0.1:45584 (3 connections now open)
Thu Jun 14 01:34:32 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:34:32 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:34:32 [interruptThread] now exiting
m30002| Thu Jun 14 01:34:32 dbexit:
m30002| Thu Jun 14 01:34:32 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:34:32 [interruptThread] closing listening socket: 32
m30002| Thu Jun 14 01:34:32 [interruptThread] closing listening socket: 33
m30002| Thu Jun 14 01:34:32 [interruptThread] closing listening socket: 34
m30002| Thu Jun 14 01:34:32 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:34:32 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:34:32 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:34:32 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:34:32 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:34:32 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:34:32 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:34:32 dbexit: really exiting now
Thu Jun 14 01:34:33 shell: stopped mongo program on port 30002
*** ShardingTest geo_near_random1 completed successfully in 14.557 seconds ***
14673.245907ms
Thu Jun 14 01:34:33 [initandlisten] connection accepted from 127.0.0.1:54953 #31 (18 connections now open)
*******************************************
Test : geo_near_random2.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/geo_near_random2.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/geo_near_random2.js";TestData.testFile = "geo_near_random2.js";TestData.testName = "geo_near_random2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:34:33 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/geo_near_random20'
Thu Jun 14 01:34:33 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/geo_near_random20
m30000| Thu Jun 14 01:34:33
m30000| Thu Jun 14 01:34:33 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:34:33
m30000| Thu Jun 14 01:34:33 [initandlisten] MongoDB starting : pid=24924 port=30000 dbpath=/data/db/geo_near_random20 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:34:33 [initandlisten]
m30000| Thu Jun 14 01:34:33 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:34:33 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:34:33 [initandlisten]
m30000| Thu Jun 14 01:34:33 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:34:33 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:34:33 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:34:33 [initandlisten]
m30000| Thu Jun 14 01:34:33 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:34:33 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:34:33 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:34:33 [initandlisten] options: { dbpath: "/data/db/geo_near_random20", port: 30000 }
m30000| Thu Jun 14 01:34:33 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:34:33 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/geo_near_random21'
m30000| Thu Jun 14 01:34:34 [initandlisten] connection accepted from 127.0.0.1:51452 #1 (1 connection now open)
Thu Jun 14 01:34:34 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/geo_near_random21
m30001| Thu Jun 14 01:34:34
m30001| Thu Jun 14 01:34:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:34:34
m30001| Thu Jun 14 01:34:34 [initandlisten] MongoDB starting : pid=24937 port=30001 dbpath=/data/db/geo_near_random21 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:34:34 [initandlisten]
m30001| Thu Jun 14 01:34:34 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:34:34 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:34:34 [initandlisten]
m30001| Thu Jun 14 01:34:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:34:34 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:34:34 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:34:34 [initandlisten]
m30001| Thu Jun 14 01:34:34 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:34:34 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:34:34 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:34:34 [initandlisten] options: { dbpath: "/data/db/geo_near_random21", port: 30001 }
m30001| Thu Jun 14 01:34:34 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:34:34 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/geo_near_random22'
m30001| Thu Jun 14 01:34:34 [initandlisten] connection accepted from 127.0.0.1:42861 #1 (1 connection now open)
Thu Jun 14 01:34:34 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/geo_near_random22
m30002| Thu Jun 14 01:34:34
m30002| Thu Jun 14 01:34:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:34:34
m30002| Thu Jun 14 01:34:34 [initandlisten] MongoDB starting : pid=24950 port=30002 dbpath=/data/db/geo_near_random22 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:34:34 [initandlisten]
m30002| Thu Jun 14 01:34:34 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:34:34 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:34:34 [initandlisten]
m30002| Thu Jun 14 01:34:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:34:34 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:34:34 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:34:34 [initandlisten]
m30002| Thu Jun 14 01:34:34 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:34:34 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:34:34 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:34:34 [initandlisten] options: { dbpath: "/data/db/geo_near_random22", port: 30002 }
m30002| Thu Jun 14 01:34:34 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:34:34 [websvr] admin web console waiting for connections on port 31002
"localhost:30000"
ShardingTest geo_near_random2 :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001,
        connection to localhost:30002
    ]
}
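
The connection summary above is printed by the ShardingTest helper the jstest uses to stand up its cluster. Under the conventions of this suite the setup is roughly the following (the constructor form is an assumption; the positional signature was the common one in shells of this vintage):

    // 3 shards (mongod on 30000-30002), 1 mongos (30999), config data on the first mongod
    var s = new ShardingTest("geo_near_random2", 3);
    var db = s.getDB("test");   // "test" ends up with shard0001 as its primary in this run
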
Thu Jun 14 01:34:34 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:34:34 [initandlisten] connection accepted from 127.0.0.1:51457 #2 (2 connections now open)
m30002| Thu Jun 14 01:34:34 [initandlisten] connection accepted from 127.0.0.1:45598 #1 (1 connection now open)
m30999| Thu Jun 14 01:34:34 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:34:34 [mongosMain] MongoS version 2.1.2-pre- starting: pid=24963 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:34:34 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:34:34 [FileAllocator] allocating new datafile /data/db/geo_near_random20/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:34:34 [FileAllocator] creating directory /data/db/geo_near_random20/_tmp
m30999| Thu Jun 14 01:34:34 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:34:34 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:34:34 [initandlisten] connection accepted from 127.0.0.1:51459 #3 (3 connections now open)
m30000| Thu Jun 14 01:34:34 [FileAllocator] done allocating datafile /data/db/geo_near_random20/config.ns, size: 16MB, took 0.267 secs
m30000| Thu Jun 14 01:34:34 [FileAllocator] allocating new datafile /data/db/geo_near_random20/config.0, filling with zeroes...
m30000| Thu Jun 14 01:34:35 [FileAllocator] done allocating datafile /data/db/geo_near_random20/config.0, size: 16MB, took 0.281 secs
m30000| Thu Jun 14 01:34:35 [FileAllocator] allocating new datafile /data/db/geo_near_random20/config.1, filling with zeroes...
m30000| Thu Jun 14 01:34:35 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn2] insert config.settings keyUpdates:0 locks(micros) w:567383 567ms
m30000| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:51462 #4 (4 connections now open)
m30000| Thu Jun 14 01:34:35 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:34:35 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:34:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:34:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:34:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:34:35 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:34:35 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:35 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:34:35 [websvr] admin web console waiting for connections on port 31999
m30000| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:51463 #5 (5 connections now open)
m30999| Thu Jun 14 01:34:35 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:34:35 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:34:35 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:34:35
m30999| Thu Jun 14 01:34:35 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:34:35 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:51464 #6 (6 connections now open)
m30000| Thu Jun 14 01:34:35 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn5] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [conn5] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:34:35 [conn5] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:34:35 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652075:1804289383' acquired, ts : 4fd977eb87332556a10ca98a
m30999| Thu Jun 14 01:34:35 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652075:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:34:35 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652075:1804289383' unlocked.
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:34:35 [mongosMain] connection accepted from 127.0.0.1:43444 #1 (1 connection now open)
m30999| Thu Jun 14 01:34:35 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:34:35 [conn5] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:35 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:34:35 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:42873 #2 (2 connections now open)
m30999| Thu Jun 14 01:34:35 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:45609 #2 (2 connections now open)
m30999| Thu Jun 14 01:34:35 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
m30000| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:51468 #7 (7 connections now open)
m30999| Thu Jun 14 01:34:35 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977eb87332556a10ca989
m30999| Thu Jun 14 01:34:35 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977eb87332556a10ca989
m30001| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:42876 #3 (3 connections now open)
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
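
The two "balancer" lines above are the test switching the automatic balancer off so that chunk placement stays entirely under the test's control. A sketch of how that is commonly done in shells of this era, via a direct write to config.settings (sh.stopBalancer() is the later convenience wrapper):

    // disable automatic balancing for the duration of the test
    var conf = db.getSiblingDB("config");
    conf.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true /* upsert */);
    // "Waiting for the balancer lock..." then just waits for any in-flight
    // balancing round to release the distributed lock before the test proceeds
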
m30002| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:45612 #3 (3 connections now open)
m30999| Thu Jun 14 01:34:35 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd977eb87332556a10ca989
m30999| Thu Jun 14 01:34:35 [conn] couldn't find database [test] in config db
m30001| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:42878 #4 (4 connections now open)
m30002| Thu Jun 14 01:34:35 [initandlisten] connection accepted from 127.0.0.1:45614 #4 (4 connections now open)
m30999| Thu Jun 14 01:34:35 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:34:35 [conn] DROP: test.geo_near_random2
m30001| Thu Jun 14 01:34:35 [conn3] CMD: drop test.geo_near_random2
starting test: geo_near_random2
m30999| Thu Jun 14 01:34:35 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:34:35 [conn] CMD: shardcollection: { shardcollection: "test.geo_near_random2", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:34:35 [conn] enable sharding on: test.geo_near_random2 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:34:35 [conn] going to create 1 chunk(s) for: test.geo_near_random2 using new epoch 4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:35 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 2 version: 1|0||4fd977eb87332556a10ca98b based on: (empty)
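
The "enabling sharding" / "CMD: shardcollection" lines map directly to the enableSharding and shardCollection admin commands with { _id: 1 } as the shard key; mongos then creates the single initial (MinKey, MaxKey) chunk on the primary shard. Roughly:

    var admin = db.getSiblingDB("admin");
    admin.runCommand({ enableSharding: "test" });
    admin.runCommand({ shardCollection: "test.geo_near_random2", key: { _id: 1 } });
    // one chunk { MinKey -> MaxKey } now lives on shard0001, the primary for "test"
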
m30999| Thu Jun 14 01:34:35 [conn] resetting shard version of test.geo_near_random2 on localhost:30000, version is zero
m30001| Thu Jun 14 01:34:35 [FileAllocator] allocating new datafile /data/db/geo_near_random21/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:34:35 [FileAllocator] creating directory /data/db/geo_near_random21/_tmp
m30000| Thu Jun 14 01:34:35 [conn5] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:34:35 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:35 [FileAllocator] done allocating datafile /data/db/geo_near_random20/config.1, size: 32MB, took 0.579 secs
m30001| Thu Jun 14 01:34:35 [FileAllocator] done allocating datafile /data/db/geo_near_random21/test.ns, size: 16MB, took 0.324 secs
m30001| Thu Jun 14 01:34:35 [FileAllocator] allocating new datafile /data/db/geo_near_random21/test.0, filling with zeroes...
m30001| Thu Jun 14 01:34:36 [FileAllocator] done allocating datafile /data/db/geo_near_random21/test.0, size: 16MB, took 0.367 secs
m30001| Thu Jun 14 01:34:36 [conn4] build index test.geo_near_random2 { _id: 1 }
m30001| Thu Jun 14 01:34:36 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:34:36 [conn4] info: creating collection test.geo_near_random2 on add index
m30001| Thu Jun 14 01:34:36 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) r:299 w:1144759 1144ms
m30001| Thu Jun 14 01:34:36 [conn3] command admin.$cmd command: { setShardVersion: "test.geo_near_random2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd977eb87332556a10ca98b'), serverID: ObjectId('4fd977eb87332556a10ca989'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:18715 w:163 reslen:199 1142ms
m30001| Thu Jun 14 01:34:36 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:34:36 [initandlisten] connection accepted from 127.0.0.1:51473 #8 (8 connections now open)
m30001| Thu Jun 14 01:34:36 [FileAllocator] allocating new datafile /data/db/geo_near_random21/test.1, filling with zeroes...
m30999| Thu Jun 14 01:34:36 [conn] resetting shard version of test.geo_near_random2 on localhost:30002, version is zero
m30999| Thu Jun 14 01:34:36 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 3 version: 1|2||4fd977eb87332556a10ca98b based on: 1|0||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:36 [conn] autosplitted test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30000| Thu Jun 14 01:34:36 [initandlisten] connection accepted from 127.0.0.1:51474 #9 (9 connections now open)
m30000| Thu Jun 14 01:34:36 [initandlisten] connection accepted from 127.0.0.1:51475 #10 (10 connections now open)
m30001| Thu Jun 14 01:34:36 [conn4] request split points lookup for chunk test.geo_near_random2 { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:36 [conn4] max number of requested split points reached (2) before the end of chunk test.geo_near_random2 { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:34:36 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.geo_near_random2-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:36 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:36 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ec18792c4099d87eeb
m30001| Thu Jun 14 01:34:36 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652076:668492239 (sleeping for 30000ms)
m30001| Thu Jun 14 01:34:36 [conn4] splitChunk accepted at version 1|0||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:36-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652076348), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:36 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:37 [FileAllocator] done allocating datafile /data/db/geo_near_random21/test.1, size: 32MB, took 0.696 secs
m30001| Thu Jun 14 01:34:37 [conn3] build index test.geo_near_random2 { loc: "2d" }
m30001| Thu Jun 14 01:34:37 [conn3] build index done. scanned 5000 total records. 0.021 secs
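
The "build index ... { loc: "2d" }" lines show the geo index being created on shard0001 after the 5000 test documents have been inserted; in 2.x shell syntax that is simply:

    var t = db.geo_near_random2;
    // documents are of the form { _id: <n>, loc: [ x, y ] }, inserted by the test helper
    t.ensureIndex({ loc: "2d" });   // the build that scanned 5000 records above
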
m30999| Thu Jun 14 01:34:37 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:37 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 500.0 } ], shardId: "test.geo_near_random2-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:37 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:37 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ed18792c4099d87eec
m30001| Thu Jun 14 01:34:37 [conn4] splitChunk accepted at version 1|2||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:37-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652077268), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 500.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:37 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:37 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 4 version: 1|4||4fd977eb87332556a10ca98b based on: 1|2||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:37 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 499.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:34:37 [conn] moving chunk ns: test.geo_near_random2 moving ( ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 500.0 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30002| Thu Jun 14 01:34:37 [initandlisten] connection accepted from 127.0.0.1:45618 #5 (5 connections now open)
m30001| Thu Jun 14 01:34:37 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:37 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:37 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ed18792c4099d87eed
m30001| Thu Jun 14 01:34:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:37-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652077271), what: "moveChunk.start", ns: "test.geo_near_random2", details: { min: { _id: 0.0 }, max: { _id: 500.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:37 [conn4] moveChunk request accepted at version 1|4||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:37 [conn4] moveChunk number of documents: 500
m30002| Thu Jun 14 01:34:37 [FileAllocator] allocating new datafile /data/db/geo_near_random22/test.ns, filling with zeroes...
m30002| Thu Jun 14 01:34:37 [FileAllocator] creating directory /data/db/geo_near_random22/_tmp
m30001| Thu Jun 14 01:34:37 [initandlisten] connection accepted from 127.0.0.1:42884 #5 (5 connections now open)
m30002| Thu Jun 14 01:34:37 [FileAllocator] done allocating datafile /data/db/geo_near_random22/test.ns, size: 16MB, took 0.245 secs
m30002| Thu Jun 14 01:34:37 [FileAllocator] allocating new datafile /data/db/geo_near_random22/test.0, filling with zeroes...
m30002| Thu Jun 14 01:34:37 [FileAllocator] done allocating datafile /data/db/geo_near_random22/test.0, size: 16MB, took 0.306 secs
m30002| Thu Jun 14 01:34:37 [FileAllocator] allocating new datafile /data/db/geo_near_random22/test.1, filling with zeroes...
m30002| Thu Jun 14 01:34:37 [migrateThread] build index test.geo_near_random2 { _id: 1 }
m30002| Thu Jun 14 01:34:37 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:34:37 [migrateThread] info: creating collection test.geo_near_random2 on add index
m30002| Thu Jun 14 01:34:37 [migrateThread] build index test.geo_near_random2 { loc: "2d" }
m30002| Thu Jun 14 01:34:37 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:34:37 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 0.0 } -> { _id: 500.0 }
m30001| Thu Jun 14 01:34:38 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:38 [conn4] moveChunk setting version to: 2|0||4fd977eb87332556a10ca98b
m30002| Thu Jun 14 01:34:38 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 0.0 } -> { _id: 500.0 }
m30002| Thu Jun 14 01:34:38 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:38-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652078281), what: "moveChunk.to", ns: "test.geo_near_random2", details: { min: { _id: 0.0 }, max: { _id: 500.0 }, step1 of 5: 566, step2 of 5: 0, step3 of 5: 22, step4 of 5: 0, step5 of 5: 418 } }
m30000| Thu Jun 14 01:34:38 [initandlisten] connection accepted from 127.0.0.1:51478 #11 (11 connections now open)
m30001| Thu Jun 14 01:34:38 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:38 [conn4] moveChunk updating self version to: 2|1||4fd977eb87332556a10ca98b through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random2'
m30001| Thu Jun 14 01:34:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:38-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652078285), what: "moveChunk.commit", ns: "test.geo_near_random2", details: { min: { _id: 0.0 }, max: { _id: 500.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:38 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:38 [conn4] moveChunk deleted: 500
m30001| Thu Jun 14 01:34:38 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:38-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652078313), what: "moveChunk.from", ns: "test.geo_near_random2", details: { min: { _id: 0.0 }, max: { _id: 500.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 1004, step5 of 6: 8, step6 of 6: 26 } }
m30001| Thu Jun 14 01:34:38 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:1472 w:1168786 reslen:37 1042ms
m30999| Thu Jun 14 01:34:38 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 5 version: 2|1||4fd977eb87332556a10ca98b based on: 1|4||4fd977eb87332556a10ca98b
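
The sequence above (split at _id 500, migrate the [0, 500) chunk from shard0001 to shard0002, then reload the chunk map on mongos) is the manual split/moveChunk cycle the test repeats for each 500-document range. A sketch of the corresponding admin commands: the moveChunk find/to form is taken verbatim from the "CMD: movechunk" line, while the split-with-middle form is an assumption, since the log only shows the downstream splitChunk sent to the shard.

    var admin = db.getSiblingDB("admin");
    // split the chunk covering _id 500 at that boundary ...
    admin.runCommand({ split: "test.geo_near_random2", middle: { _id: 500 } });
    // ... then move the chunk containing _id 499 (i.e. [0, 500)) over to shard0002
    admin.runCommand({ moveChunk: "test.geo_near_random2", find: { _id: 499 }, to: "shard0002" });
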
m30999| Thu Jun 14 01:34:38 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 500.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:38 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 500.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1000.0 } ], shardId: "test.geo_near_random2-_id_500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:38 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:38 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ee18792c4099d87eee
m30001| Thu Jun 14 01:34:38 [conn4] splitChunk accepted at version 2|1||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:38-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652078316), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 500.0 }, max: { _id: 1000.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 1000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:38 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:38 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 6 version: 2|3||4fd977eb87332556a10ca98b based on: 2|1||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:38 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 999.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:34:38 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { _id: 1000.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:38 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 1000.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1500.0 } ], shardId: "test.geo_near_random2-_id_1000.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:38 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:38 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ee18792c4099d87eef
m30001| Thu Jun 14 01:34:38 [conn4] splitChunk accepted at version 2|3||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:38-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652078325), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 1000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1000.0 }, max: { _id: 1500.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 1500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:38 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:38 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 7 version: 2|5||4fd977eb87332556a10ca98b based on: 2|3||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:38 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 1499.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:34:38 [conn] moving chunk ns: test.geo_near_random2 moving ( ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { _id: 1000.0 } max: { _id: 1500.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:38 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1000.0 }, max: { _id: 1500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_1000.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:38 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:38 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ee18792c4099d87ef0
m30001| Thu Jun 14 01:34:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:38-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652078328), what: "moveChunk.start", ns: "test.geo_near_random2", details: { min: { _id: 1000.0 }, max: { _id: 1500.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:38 [conn4] moveChunk request accepted at version 2|5||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:38 [conn4] moveChunk number of documents: 500
m30001| Thu Jun 14 01:34:38 [initandlisten] connection accepted from 127.0.0.1:42886 #6 (6 connections now open)
m30000| Thu Jun 14 01:34:38 [FileAllocator] allocating new datafile /data/db/geo_near_random20/test.ns, filling with zeroes...
m30002| Thu Jun 14 01:34:38 [FileAllocator] done allocating datafile /data/db/geo_near_random22/test.1, size: 32MB, took 0.678 secs
m30000| Thu Jun 14 01:34:38 [FileAllocator] done allocating datafile /data/db/geo_near_random20/test.ns, size: 16MB, took 0.488 secs
m30000| Thu Jun 14 01:34:38 [FileAllocator] allocating new datafile /data/db/geo_near_random20/test.0, filling with zeroes...
m30000| Thu Jun 14 01:34:39 [FileAllocator] done allocating datafile /data/db/geo_near_random20/test.0, size: 16MB, took 0.379 secs
m30000| Thu Jun 14 01:34:39 [FileAllocator] allocating new datafile /data/db/geo_near_random20/test.1, filling with zeroes...
m30000| Thu Jun 14 01:34:39 [migrateThread] build index test.geo_near_random2 { _id: 1 }
m30000| Thu Jun 14 01:34:39 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:39 [migrateThread] info: creating collection test.geo_near_random2 on add index
m30000| Thu Jun 14 01:34:39 [migrateThread] build index test.geo_near_random2 { loc: "2d" }
m30000| Thu Jun 14 01:34:39 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 1000.0 } -> { _id: 1500.0 }
m30000| Thu Jun 14 01:34:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 1000.0 } -> { _id: 1500.0 }
m30000| Thu Jun 14 01:34:39 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:39-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652079341), what: "moveChunk.to", ns: "test.geo_near_random2", details: { min: { _id: 1000.0 }, max: { _id: 1500.0 }, step1 of 5: 878, step2 of 5: 0, step3 of 5: 23, step4 of 5: 0, step5 of 5: 109 } }
m30000| Thu Jun 14 01:34:39 [initandlisten] connection accepted from 127.0.0.1:51480 #12 (12 connections now open)
m30001| Thu Jun 14 01:34:39 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 1000.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:39 [conn4] moveChunk setting version to: 3|0||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:39 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 1000.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:39 [conn4] moveChunk updating self version to: 3|1||4fd977eb87332556a10ca98b through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random2'
m30001| Thu Jun 14 01:34:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:39-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652079346), what: "moveChunk.commit", ns: "test.geo_near_random2", details: { min: { _id: 1000.0 }, max: { _id: 1500.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:39 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:39 [conn4] moveChunk deleted: 500
m30001| Thu Jun 14 01:34:39 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:39-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652079372), what: "moveChunk.from", ns: "test.geo_near_random2", details: { min: { _id: 1000.0 }, max: { _id: 1500.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 1009, step5 of 6: 6, step6 of 6: 25 } }
m30001| Thu Jun 14 01:34:39 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1000.0 }, max: { _id: 1500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_1000.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2305 w:1192095 reslen:37 1045ms
m30999| Thu Jun 14 01:34:39 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 8 version: 3|1||4fd977eb87332556a10ca98b based on: 2|5||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:39 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { _id: 1500.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:39 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 1500.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2000.0 } ], shardId: "test.geo_near_random2-_id_1500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:39 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:39 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ef18792c4099d87ef1
m30001| Thu Jun 14 01:34:39 [conn4] splitChunk accepted at version 3|1||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:39-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652079376), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 1500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1500.0 }, max: { _id: 2000.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 2000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:39 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:39 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 9 version: 3|3||4fd977eb87332556a10ca98b based on: 3|1||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:39 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 1999.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:34:39 [conn] moving chunk ns: test.geo_near_random2 moving ( ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 3|2||000000000000000000000000 min: { _id: 1500.0 } max: { _id: 2000.0 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:34:39 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 1500.0 }, max: { _id: 2000.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_1500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:39 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:39 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977ef18792c4099d87ef2
m30001| Thu Jun 14 01:34:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:39-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652079379), what: "moveChunk.start", ns: "test.geo_near_random2", details: { min: { _id: 1500.0 }, max: { _id: 2000.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:39 [conn4] moveChunk request accepted at version 3|3||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:39 [conn4] moveChunk number of documents: 500
m30002| Thu Jun 14 01:34:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 1500.0 } -> { _id: 2000.0 }
m30000| Thu Jun 14 01:34:39 [FileAllocator] done allocating datafile /data/db/geo_near_random20/test.1, size: 32MB, took 0.723 secs
m30001| Thu Jun 14 01:34:40 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 1500.0 }, max: { _id: 2000.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:40 [conn4] moveChunk setting version to: 4|0||4fd977eb87332556a10ca98b
m30002| Thu Jun 14 01:34:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 1500.0 } -> { _id: 2000.0 }
m30002| Thu Jun 14 01:34:40 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:40-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652080389), what: "moveChunk.to", ns: "test.geo_near_random2", details: { min: { _id: 1500.0 }, max: { _id: 2000.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 24, step4 of 5: 0, step5 of 5: 983 } }
m30001| Thu Jun 14 01:34:40 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 1500.0 }, max: { _id: 2000.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:40 [conn4] moveChunk updating self version to: 4|1||4fd977eb87332556a10ca98b through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random2'
m30001| Thu Jun 14 01:34:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:40-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652080394), what: "moveChunk.commit", ns: "test.geo_near_random2", details: { min: { _id: 1500.0 }, max: { _id: 2000.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:40 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:40 [conn4] moveChunk deleted: 500
m30001| Thu Jun 14 01:34:40 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:40-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652080419), what: "moveChunk.from", ns: "test.geo_near_random2", details: { min: { _id: 1500.0 }, max: { _id: 2000.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 25 } }
m30001| Thu Jun 14 01:34:40 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 1500.0 }, max: { _id: 2000.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_1500.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:3126 w:1214732 reslen:37 1040ms
m30999| Thu Jun 14 01:34:40 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 10 version: 4|1||4fd977eb87332556a10ca98b based on: 3|3||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:40 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 2000.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2500.0 } ], shardId: "test.geo_near_random2-_id_2000.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:40 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:40 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { _id: 2000.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:40 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f018792c4099d87ef3
m30001| Thu Jun 14 01:34:40 [conn4] splitChunk accepted at version 4|1||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:40-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652080423), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 2000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2000.0 }, max: { _id: 2500.0 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 2500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:40 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:40 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 11 version: 4|3||4fd977eb87332556a10ca98b based on: 4|1||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:40 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 2499.0 }, to: "shard0001" }
m30001| Thu Jun 14 01:34:40 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 2500.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 3000.0 } ], shardId: "test.geo_near_random2-_id_2500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:40 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:40 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 4|3||000000000000000000000000 min: { _id: 2500.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:40 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f018792c4099d87ef4
m30001| Thu Jun 14 01:34:40 [conn4] splitChunk accepted at version 4|3||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:40-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652080428), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 2500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2500.0 }, max: { _id: 3000.0 }, lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 3000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:40 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:40 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 12 version: 4|5||4fd977eb87332556a10ca98b based on: 4|3||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:40 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 2999.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:34:40 [conn] moving chunk ns: test.geo_near_random2 moving ( ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 4|4||000000000000000000000000 min: { _id: 2500.0 } max: { _id: 3000.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:40 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 2500.0 }, max: { _id: 3000.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_2500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:40 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:40 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f018792c4099d87ef5
m30001| Thu Jun 14 01:34:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:40-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652080431), what: "moveChunk.start", ns: "test.geo_near_random2", details: { min: { _id: 2500.0 }, max: { _id: 3000.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:40 [conn4] moveChunk request accepted at version 4|5||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:40 [conn4] moveChunk number of documents: 500
m30000| Thu Jun 14 01:34:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 2500.0 } -> { _id: 3000.0 }
m30001| Thu Jun 14 01:34:41 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 2500.0 }, max: { _id: 3000.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:41 [conn4] moveChunk setting version to: 5|0||4fd977eb87332556a10ca98b
m30000| Thu Jun 14 01:34:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 2500.0 } -> { _id: 3000.0 }
m30000| Thu Jun 14 01:34:41 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:41-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652081441), what: "moveChunk.to", ns: "test.geo_near_random2", details: { min: { _id: 2500.0 }, max: { _id: 3000.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 23, step4 of 5: 0, step5 of 5: 985 } }
m30001| Thu Jun 14 01:34:41 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 2500.0 }, max: { _id: 3000.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:41 [conn4] moveChunk updating self version to: 5|1||4fd977eb87332556a10ca98b through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random2'
m30001| Thu Jun 14 01:34:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:41-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652081446), what: "moveChunk.commit", ns: "test.geo_near_random2", details: { min: { _id: 2500.0 }, max: { _id: 3000.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:41 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:41 [conn4] moveChunk deleted: 500
m30001| Thu Jun 14 01:34:41 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:41-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652081471), what: "moveChunk.from", ns: "test.geo_near_random2", details: { min: { _id: 2500.0 }, max: { _id: 3000.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 24 } }
m30001| Thu Jun 14 01:34:41 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 2500.0 }, max: { _id: 3000.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_2500.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:3954 w:1236897 reslen:37 1041ms
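The 1041ms command above is the donor shard's side of that migration; the client-facing request is the one mongos recorded as "CMD: movechunk". A minimal sketch of issuing it by hand, assuming a shell connected to the mongos (values copied from the log):

    // Ask mongos to move the chunk containing _id 2999 to shard0000.
    var admin = db.getSiblingDB("admin");
    printjson(admin.runCommand({
        moveChunk: "test.geo_near_random2",
        find: { _id: 2999 },   // any key inside the chunk to be moved
        to: "shard0000"        // destination shard name from the shards list
    }));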
m30999| Thu Jun 14 01:34:41 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 13 version: 5|1||4fd977eb87332556a10ca98b based on: 4|5||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:41 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 3000.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 3500.0 } ], shardId: "test.geo_near_random2-_id_3000.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:41 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:41 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 4|5||000000000000000000000000 min: { _id: 3000.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:41 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f118792c4099d87ef6
m30001| Thu Jun 14 01:34:41 [conn4] splitChunk accepted at version 5|1||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:41-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652081476), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 3000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3000.0 }, max: { _id: 3500.0 }, lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 3500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:41 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:41 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 14 version: 5|3||4fd977eb87332556a10ca98b based on: 5|1||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:41 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 3499.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:34:41 [conn] moving chunk ns: test.geo_near_random2 moving ( ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 5|2||000000000000000000000000 min: { _id: 3000.0 } max: { _id: 3500.0 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:34:41 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 3000.0 }, max: { _id: 3500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_3000.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:41 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:41 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f118792c4099d87ef7
m30001| Thu Jun 14 01:34:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:41-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652081479), what: "moveChunk.start", ns: "test.geo_near_random2", details: { min: { _id: 3000.0 }, max: { _id: 3500.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:41 [conn4] moveChunk request accepted at version 5|3||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:41 [conn4] moveChunk number of documents: 500
m30002| Thu Jun 14 01:34:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 3000.0 } -> { _id: 3500.0 }
m30001| Thu Jun 14 01:34:42 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 3000.0 }, max: { _id: 3500.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:42 [conn4] moveChunk setting version to: 6|0||4fd977eb87332556a10ca98b
m30002| Thu Jun 14 01:34:42 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 3000.0 } -> { _id: 3500.0 }
m30002| Thu Jun 14 01:34:42 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:42-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652082501), what: "moveChunk.to", ns: "test.geo_near_random2", details: { min: { _id: 3000.0 }, max: { _id: 3500.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 23, step4 of 5: 0, step5 of 5: 997 } }
m30001| Thu Jun 14 01:34:42 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 3000.0 }, max: { _id: 3500.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:42 [conn4] moveChunk updating self version to: 6|1||4fd977eb87332556a10ca98b through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random2'
m30001| Thu Jun 14 01:34:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:42-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652082506), what: "moveChunk.commit", ns: "test.geo_near_random2", details: { min: { _id: 3000.0 }, max: { _id: 3500.0 }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:34:42 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:42 [conn4] moveChunk deleted: 500
m30001| Thu Jun 14 01:34:42 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:42-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652082530), what: "moveChunk.from", ns: "test.geo_near_random2", details: { min: { _id: 3000.0 }, max: { _id: 3500.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 8, step4 of 6: 1001, step5 of 6: 16, step6 of 6: 23 } }
m30001| Thu Jun 14 01:34:42 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 3000.0 }, max: { _id: 3500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_3000.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:4775 w:1258085 reslen:37 1051ms
m30999| Thu Jun 14 01:34:42 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 15 version: 6|1||4fd977eb87332556a10ca98b based on: 5|3||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:42 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 3500.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 4000.0 } ], shardId: "test.geo_near_random2-_id_3500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:42 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:42 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { _id: 3500.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:42 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f218792c4099d87ef8
m30001| Thu Jun 14 01:34:42 [conn4] splitChunk accepted at version 6|1||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:42-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652082535), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 3500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3500.0 }, max: { _id: 4000.0 }, lastmod: Timestamp 6000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 4000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:42 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:42 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 16 version: 6|3||4fd977eb87332556a10ca98b based on: 6|1||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:42 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 3999.0 }, to: "shard0001" }
m30001| Thu Jun 14 01:34:42 [conn4] received splitChunk request: { splitChunk: "test.geo_near_random2", keyPattern: { _id: 1.0 }, min: { _id: 4000.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 4500.0 } ], shardId: "test.geo_near_random2-_id_4000.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:42 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:42 [conn] splitting: test.geo_near_random2 shard: ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 6|3||000000000000000000000000 min: { _id: 4000.0 } max: { _id: MaxKey }
m30001| Thu Jun 14 01:34:42 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f218792c4099d87ef9
m30001| Thu Jun 14 01:34:42 [conn4] splitChunk accepted at version 6|3||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:42-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652082540), what: "split", ns: "test.geo_near_random2", details: { before: { min: { _id: 4000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4000.0 }, max: { _id: 4500.0 }, lastmod: Timestamp 6000|4, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') }, right: { min: { _id: 4500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|5, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b') } } }
m30001| Thu Jun 14 01:34:42 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30999| Thu Jun 14 01:34:42 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 17 version: 6|5||4fd977eb87332556a10ca98b based on: 6|3||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:42 [conn] CMD: movechunk: { moveChunk: "test.geo_near_random2", find: { _id: 4499.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:34:42 [conn] moving chunk ns: test.geo_near_random2 moving ( ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 6|4||000000000000000000000000 min: { _id: 4000.0 } max: { _id: 4500.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:34:42 [conn4] received moveChunk request: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 4000.0 }, max: { _id: 4500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_4000.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:42 [conn4] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:42 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f218792c4099d87efa
m30001| Thu Jun 14 01:34:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:42-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652082543), what: "moveChunk.start", ns: "test.geo_near_random2", details: { min: { _id: 4000.0 }, max: { _id: 4500.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:42 [conn4] moveChunk request accepted at version 6|5||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:42 [conn4] moveChunk number of documents: 500
m30000| Thu Jun 14 01:34:42 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 4000.0 } -> { _id: 4500.0 }
m30001| Thu Jun 14 01:34:43 [conn4] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 4000.0 }, max: { _id: 4500.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:43 [conn4] moveChunk setting version to: 7|0||4fd977eb87332556a10ca98b
m30000| Thu Jun 14 01:34:43 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: 4000.0 } -> { _id: 4500.0 }
m30000| Thu Jun 14 01:34:43 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:43-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652083549), what: "moveChunk.to", ns: "test.geo_near_random2", details: { min: { _id: 4000.0 }, max: { _id: 4500.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 23, step4 of 5: 0, step5 of 5: 980 } }
m30001| Thu Jun 14 01:34:43 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: 4000.0 }, max: { _id: 4500.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 500, clonedBytes: 25000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:43 [conn4] moveChunk updating self version to: 7|1||4fd977eb87332556a10ca98b through { _id: MinKey } -> { _id: 0.0 } for collection 'test.geo_near_random2'
m30001| Thu Jun 14 01:34:43 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:43-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652083554), what: "moveChunk.commit", ns: "test.geo_near_random2", details: { min: { _id: 4000.0 }, max: { _id: 4500.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:43 [conn4] doing delete inline
m30001| Thu Jun 14 01:34:43 [conn4] moveChunk deleted: 500
m30001| Thu Jun 14 01:34:43 [conn4] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:43 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:43-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42878", time: new Date(1339652083578), what: "moveChunk.from", ns: "test.geo_near_random2", details: { min: { _id: 4000.0 }, max: { _id: 4500.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 8, step6 of 6: 23 } }
m30001| Thu Jun 14 01:34:43 [conn4] command admin.$cmd command: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 4000.0 }, max: { _id: 4500.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_4000.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:5597 w:1279442 reslen:37 1035ms
m30999| Thu Jun 14 01:34:43 [conn] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 18 version: 7|1||4fd977eb87332556a10ca98b based on: 6|5||4fd977eb87332556a10ca98b
m30000| Thu Jun 14 01:34:43 [initandlisten] connection accepted from 127.0.0.1:51481 #13 (13 connections now open)
m30001| Thu Jun 14 01:34:43 [initandlisten] connection accepted from 127.0.0.1:42889 #7 (7 connections now open)
m30002| Thu Jun 14 01:34:43 [initandlisten] connection accepted from 127.0.0.1:45625 #6 (6 connections now open)
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.geo_near_random2 chunks:
              { "_id" : { $minKey : 1 } } -->> { "_id" : 0 } on : shard0001 { "estimate" : false, "size" : 0, "numObjects" : 0 }
              { "_id" : 0 } -->> { "_id" : 500 } on : shard0002 { "estimate" : false, "size" : 26000, "numObjects" : 500 }
              { "_id" : 500 } -->> { "_id" : 1000 } on : shard0001 { "estimate" : false, "size" : 26020, "numObjects" : 500 }
              { "_id" : 1000 } -->> { "_id" : 1500 } on : shard0000 { "estimate" : false, "size" : 26000, "numObjects" : 500 }
              { "_id" : 1500 } -->> { "_id" : 2000 } on : shard0002 { "estimate" : false, "size" : 26020, "numObjects" : 500 }
              { "_id" : 2000 } -->> { "_id" : 2500 } on : shard0001 { "estimate" : false, "size" : 26000, "numObjects" : 500 }
              { "_id" : 2500 } -->> { "_id" : 3000 } on : shard0000 { "estimate" : false, "size" : 26020, "numObjects" : 500 }
              { "_id" : 3000 } -->> { "_id" : 3500 } on : shard0002 { "estimate" : false, "size" : 26000, "numObjects" : 500 }
              { "_id" : 3500 } -->> { "_id" : 4000 } on : shard0001 { "estimate" : false, "size" : 26000, "numObjects" : 500 }
              { "_id" : 4000 } -->> { "_id" : 4500 } on : shard0000 { "estimate" : false, "size" : 26000, "numObjects" : 500 }
              { "_id" : 4500 } -->> { "_id" : { $maxKey : 1 } } on : shard0001 { "estimate" : false, "size" : 26000, "numObjects" : 500 }
testing point: [ 0, 0 ] opts: { "sphere" : 0, "nToTest" : 50, "sharded" : true }
testing point: [ 177.39256076030435, 73.05581317283213 ] opts: { "sphere" : 0, "nToTest" : 50, "sharded" : true }
testing point: [ -67.29340393785388, -34.8961163777858 ] opts: { "sphere" : 0, "nToTest" : 50, "sharded" : true }
testing point: [ 85.15901549812409, -57.35448229126632 ] opts: { "sphere" : 0, "nToTest" : 50, "sharded" : true }
testing point: [ -108.72760251741856, 24.111320385709405 ] opts: { "sphere" : 0, "nToTest" : 50, "sharded" : true }
testing point: [ 0, 0 ] opts: { "sphere" : 1, "nToTest" : 50, "sharded" : true }
m30000| Thu Jun 14 01:34:45 [initandlisten] connection accepted from 127.0.0.1:51484 #14 (14 connections now open)
m30000| Thu Jun 14 01:34:45 [initandlisten] connection accepted from 127.0.0.1:51485 #15 (15 connections now open)
m30999| Thu Jun 14 01:34:45 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652075:1804289383' acquired, ts : 4fd977f587332556a10ca98c
m30999| Thu Jun 14 01:34:45 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:34:45 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:34:45 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:34:45 [Balancer] shard0002 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:34:45 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:34:45 [Balancer] shard0000
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_1000.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 1000.0 }, max: { _id: 1500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_2500.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 2500.0 }, max: { _id: 3000.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_4000.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 4000.0 }, max: { _id: 4500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:34:45 [Balancer] shard0001
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_MinKey", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_500.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 500.0 }, max: { _id: 1000.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_2000.0", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 2000.0 }, max: { _id: 2500.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_3500.0", lastmod: Timestamp 6000|2, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 3500.0 }, max: { _id: 4000.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_4500.0", lastmod: Timestamp 6000|5, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 4500.0 }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:34:45 [Balancer] shard0002
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 0.0 }, max: { _id: 500.0 }, shard: "shard0002" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_1500.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 1500.0 }, max: { _id: 2000.0 }, shard: "shard0002" }
m30999| Thu Jun 14 01:34:45 [Balancer] { _id: "test.geo_near_random2-_id_3000.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: 3000.0 }, max: { _id: 3500.0 }, shard: "shard0002" }
m30999| Thu Jun 14 01:34:45 [Balancer] ----
m30999| Thu Jun 14 01:34:45 [Balancer] chose [shard0001] to [shard0000] { _id: "test.geo_near_random2-_id_MinKey", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('4fd977eb87332556a10ca98b'), ns: "test.geo_near_random2", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:34:45 [Balancer] moving chunk ns: test.geo_near_random2 moving ( ns:test.geo_near_random2 at: shard0001:localhost:30001 lastmod: 7|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
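The ShardInfoMap/ShardToChunksMap dump above is built from the config database, and the balancer round runs under the 'balancer' distributed lock it just acquired. A sketch of inspecting the same state by hand, assuming a shell connected to mongos:

    var conf = db.getSiblingDB("config");
    // The distributed lock the balancer holds during this round:
    printjson(conf.locks.findOne({ _id: "balancer" }));
    // Tally chunks per shard for the collection being balanced:
    var counts = {};
    conf.chunks.find({ ns: "test.geo_near_random2" }).forEach(function (c) {
        counts[c.shard] = (counts[c.shard] || 0) + 1;
    });
    printjson(counts);   // expected here: shard0000: 3, shard0001: 5, shard0002: 3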
m30000| Thu Jun 14 01:34:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: MinKey } -> { _id: 0.0 }
m30001| Thu Jun 14 01:34:45 [initandlisten] connection accepted from 127.0.0.1:42893 #8 (8 connections now open)
m30001| Thu Jun 14 01:34:45 [conn8] received moveChunk request: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:34:45 [conn8] created new distributed lock for test.geo_near_random2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:34:45 [conn8] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' acquired, ts : 4fd977f518792c4099d87efb
m30001| Thu Jun 14 01:34:45 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:45-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42893", time: new Date(1339652085099), what: "moveChunk.start", ns: "test.geo_near_random2", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:45 [conn8] moveChunk request accepted at version 7|1||4fd977eb87332556a10ca98b
m30001| Thu Jun 14 01:34:45 [conn8] moveChunk number of documents: 0
testing point: [ 5.723846196010709, -39.399662643671036 ] opts: { "sphere" : 1, "nToTest" : 50, "sharded" : true }
m30001| Thu Jun 14 01:34:46 [conn8] moveChunk data transfer progress: { active: true, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:34:46 [conn8] moveChunk setting version to: 8|0||4fd977eb87332556a10ca98b
m30000| Thu Jun 14 01:34:46 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.geo_near_random2' { _id: MinKey } -> { _id: 0.0 }
m30000| Thu Jun 14 01:34:46 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:46-3", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652086114), what: "moveChunk.to", ns: "test.geo_near_random2", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 4, step4 of 5: 0, step5 of 5: 1008 } }
m30999| Thu Jun 14 01:34:46 [Balancer] ChunkManager: time to load chunks for test.geo_near_random2: 0ms sequenceNumber: 19 version: 8|1||4fd977eb87332556a10ca98b based on: 7|1||4fd977eb87332556a10ca98b
m30999| Thu Jun 14 01:34:46 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652075:1804289383' unlocked.
m30001| Thu Jun 14 01:34:46 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.geo_near_random2", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:34:46 [conn8] moveChunk updating self version to: 8|1||4fd977eb87332556a10ca98b through { _id: 500.0 } -> { _id: 1000.0 } for collection 'test.geo_near_random2'
m30001| Thu Jun 14 01:34:46 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:46-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42893", time: new Date(1339652086118), what: "moveChunk.commit", ns: "test.geo_near_random2", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:34:46 [conn8] doing delete inline
m30001| Thu Jun 14 01:34:46 [conn8] moveChunk deleted: 0
m30001| Thu Jun 14 01:34:46 [conn8] distributed lock 'test.geo_near_random2/domU-12-31-39-01-70-B4:30001:1339652076:668492239' unlocked.
m30001| Thu Jun 14 01:34:46 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:34:46-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42893", time: new Date(1339652086123), what: "moveChunk.from", ns: "test.geo_near_random2", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 4, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 15, step6 of 6: 0 } }
m30001| Thu Jun 14 01:34:46 [conn8] command admin.$cmd command: { moveChunk: "test.geo_near_random2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "test.geo_near_random2-_id_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:62 w:55 reslen:37 1027ms
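Each "about to log metadata event" line above ends up as a document in the config database's changelog collection, including the per-step timings under details. A sketch of reading that history back, assuming a shell connected to mongos:

    // List the migration events recorded for this collection, oldest first.
    db.getSiblingDB("config").changelog
        .find({ ns: "test.geo_near_random2", what: /^moveChunk/ })
        .sort({ time: 1 })
        .forEach(function (e) { printjson({ time: e.time, what: e.what, details: e.details }); });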
testing point: [ -3.535750275850296, -62.34947331994772 ] opts: { "sphere" : 1, "nToTest" : 50, "sharded" : true }
testing point: [ 67.4604580964148, -60.43469586968422 ] opts: { "sphere" : 1, "nToTest" : 50, "sharded" : true }
testing point: [ 119.3031469411403, -71.59176651388407 ] opts: { "sphere" : 1, "nToTest" : 50, "sharded" : true }
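The "testing point" lines come from the geo_near_random2 test driver, which issues geoNear queries through mongos at each point with the options shown. A sketch of one such probe, assuming a shell connected to mongos and the 2d index the test builds on the collection (values copied from the sphere:1 line above):

    // One geoNear probe of the kind driven per "testing point" line:
    printjson(db.getSiblingDB("test").runCommand({
        geoNear: "geo_near_random2",   // collection under test
        near: [ 0, 0 ],                // point from the log
        num: 50,                       // matches "nToTest" : 50
        spherical: true                // matches "sphere" : 1
    }));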
m30999| Thu Jun 14 01:34:48 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30002| Thu Jun 14 01:34:48 [conn3] end connection 127.0.0.1:45612 (5 connections now open)
m30002| Thu Jun 14 01:34:48 [conn4] end connection 127.0.0.1:45614 (4 connections now open)
m30001| Thu Jun 14 01:34:48 [conn4] end connection 127.0.0.1:42878 (7 connections now open)
m30001| Thu Jun 14 01:34:48 [conn3] end connection 127.0.0.1:42876 (7 connections now open)
m30001| Thu Jun 14 01:34:48 [conn8] end connection 127.0.0.1:42893 (6 connections now open)
m30000| Thu Jun 14 01:34:48 [conn3] end connection 127.0.0.1:51459 (14 connections now open)
m30000| Thu Jun 14 01:34:48 [conn5] end connection 127.0.0.1:51463 (13 connections now open)
m30000| Thu Jun 14 01:34:48 [conn6] end connection 127.0.0.1:51464 (12 connections now open)
m30000| Thu Jun 14 01:34:48 [conn7] end connection 127.0.0.1:51468 (11 connections now open)
m30000| Thu Jun 14 01:34:48 [conn14] end connection 127.0.0.1:51484 (10 connections now open)
m30000| Thu Jun 14 01:34:48 [conn15] end connection 127.0.0.1:51485 (9 connections now open)
Thu Jun 14 01:34:49 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:34:49 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:34:49 [interruptThread] now exiting
m30000| Thu Jun 14 01:34:49 dbexit:
m30000| Thu Jun 14 01:34:49 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:34:49 [interruptThread] closing listening socket: 27
m30000| Thu Jun 14 01:34:49 [interruptThread] closing listening socket: 28
m30000| Thu Jun 14 01:34:49 [interruptThread] closing listening socket: 29
m30000| Thu Jun 14 01:34:49 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:34:49 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:34:49 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:34:49 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:34:49 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:34:49 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:34:49 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:34:49 dbexit: really exiting now
m30001| Thu Jun 14 01:34:49 [conn6] end connection 127.0.0.1:42886 (4 connections now open)
Thu Jun 14 01:34:50 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:34:50 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:34:50 [interruptThread] now exiting
m30001| Thu Jun 14 01:34:50 dbexit:
m30001| Thu Jun 14 01:34:50 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:34:50 [interruptThread] closing listening socket: 30
m30001| Thu Jun 14 01:34:50 [interruptThread] closing listening socket: 31
m30001| Thu Jun 14 01:34:50 [interruptThread] closing listening socket: 32
m30001| Thu Jun 14 01:34:50 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:34:50 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:34:50 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:34:50 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:34:50 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:34:50 [conn5] end connection 127.0.0.1:45618 (3 connections now open)
m30001| Thu Jun 14 01:34:50 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:34:50 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:34:50 dbexit: really exiting now
Thu Jun 14 01:34:51 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:34:51 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:34:51 [interruptThread] now exiting
m30002| Thu Jun 14 01:34:51 dbexit:
m30002| Thu Jun 14 01:34:51 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:34:51 [interruptThread] closing listening socket: 33
m30002| Thu Jun 14 01:34:51 [interruptThread] closing listening socket: 34
m30002| Thu Jun 14 01:34:51 [interruptThread] closing listening socket: 35
m30002| Thu Jun 14 01:34:51 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:34:51 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:34:51 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:34:51 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:34:51 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:34:51 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:34:51 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:34:51 dbexit: really exiting now
Thu Jun 14 01:34:52 shell: stopped mongo program on port 30002
*** ShardingTest geo_near_random2 completed successfully in 18.578 seconds ***
18724.606991ms
Thu Jun 14 01:34:52 [initandlisten] connection accepted from 127.0.0.1:54990 #32 (19 connections now open)
*******************************************
Test : gle_with_conf_servers.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/gle_with_conf_servers.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/gle_with_conf_servers.js";TestData.testFile = "gle_with_conf_servers.js";TestData.testName = "gle_with_conf_servers";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:34:52 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:34:52 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:34:52
m30000| Thu Jun 14 01:34:52 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:34:52
m30000| Thu Jun 14 01:34:52 [initandlisten] MongoDB starting : pid=25044 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:34:52 [initandlisten]
m30000| Thu Jun 14 01:34:52 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:34:52 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:34:52 [initandlisten]
m30000| Thu Jun 14 01:34:52 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:34:52 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:34:52 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:34:52 [initandlisten]
m30000| Thu Jun 14 01:34:52 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:34:52 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:34:52 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:34:52 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:34:52 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:34:52 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:34:52 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:34:52 [initandlisten] connection accepted from 127.0.0.1:51489 #1 (1 connection now open)
m30001| Thu Jun 14 01:34:52
m30001| Thu Jun 14 01:34:52 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:34:52
m30001| Thu Jun 14 01:34:52 [initandlisten] MongoDB starting : pid=25056 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:34:52 [initandlisten]
m30001| Thu Jun 14 01:34:52 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:34:52 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:34:52 [initandlisten]
m30001| Thu Jun 14 01:34:52 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:34:52 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:34:52 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:34:52 [initandlisten]
m30001| Thu Jun 14 01:34:52 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:34:52 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:34:52 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:34:52 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:34:52 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:34:52 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:34:52 [initandlisten] connection accepted from 127.0.0.1:42898 #1 (1 connection now open)
ShardingTest test :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
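The block above is the configuration the shell's ShardingTest harness prints as it stands up a config server, shards, and a mongos for this jstest. A rough sketch of how a jstest drives it (option names assumed for this era of the shell's test utilities):

    // Spin up a small cluster, run assertions through mongos, tear it down.
    var st = new ShardingTest({ name: "test", shards: 2, mongos: 1 });  // assumed options
    var mongos = st.s;            // connection to the mongos the harness started
    // ... test body would issue commands through `mongos` here ...
    st.stop();                    // shuts everything down, as seen at the end of each test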
Thu Jun 14 01:34:52 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:34:52 [initandlisten] connection accepted from 127.0.0.1:51492 #2 (2 connections now open)
m30000| Thu Jun 14 01:34:52 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:34:52 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Thu Jun 14 01:34:53 [initandlisten] connection accepted from 127.0.0.1:51494 #3 (3 connections now open)
m30999| Thu Jun 14 01:34:52 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:34:52 [mongosMain] MongoS version 2.1.2-pre- starting: pid=25071 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:34:52 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:34:52 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:34:52 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:34:53 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0.239 secs
m30000| Thu Jun 14 01:34:53 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:34:53 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 16MB, took 0.273 secs
m30000| Thu Jun 14 01:34:53 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:34:53 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn2] insert config.settings keyUpdates:0 locks(micros) w:524526 524ms
m30000| Thu Jun 14 01:34:53 [initandlisten] connection accepted from 127.0.0.1:51497 #4 (4 connections now open)
m30000| Thu Jun 14 01:34:53 [initandlisten] connection accepted from 127.0.0.1:51498 #5 (5 connections now open)
m30000| Thu Jun 14 01:34:53 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:34:53 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:34:53 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:34:53 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:34:53 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:34:53 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:34:53
m30999| Thu Jun 14 01:34:53 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:34:53 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652093:1804289383' acquired, ts : 4fd977fddd2fd78832c800ea
m30999| Thu Jun 14 01:34:53 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652093:1804289383' unlocked.
m30999| Thu Jun 14 01:34:53 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652093:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:34:53 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:34:53 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [initandlisten] connection accepted from 127.0.0.1:51499 #6 (6 connections now open)
m30000| Thu Jun 14 01:34:53 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:34:53 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:34:53 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:34:53 [mongosMain] connection accepted from 127.0.0.1:43479 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:34:53 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:34:53 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:34:53 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30000| Thu Jun 14 01:34:53 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:34:53 [conn3] build index done. scanned 0 total records. 0 secs
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:34:53 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30000| Thu Jun 14 01:34:53 [initandlisten] connection accepted from 127.0.0.1:51502 #7 (7 connections now open)
m30999| Thu Jun 14 01:34:53 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd977fddd2fd78832c800e9
m30999| Thu Jun 14 01:34:53 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd977fddd2fd78832c800e9
{
    "shards" : [
        "localhost:30000",
        "localhost:30001"
    ],
    "shardRawGLE" : {
        "localhost:30000" : {
            "updatedExisting" : false,
            "n" : 0,
            "connectionId" : 7,
            "wnote" : "no replication has been enabled, so w=2+ won't work",
            "err" : "norepl",
            "ok" : 1
        },
        "localhost:30001" : {
            "n" : 0,
            "connectionId" : 3,
            "wnote" : "no replication has been enabled, so w=2+ won't work",
            "err" : "norepl",
            "ok" : 1
        }
    },
    "n" : 0,
    "updatedExisting" : false,
    "err" : "norepl",
    "errs" : [
        "norepl",
        "norepl"
    ],
    "errObjects" : [
        {
            "updatedExisting" : false,
            "n" : 0,
            "connectionId" : 7,
            "wnote" : "no replication has been enabled, so w=2+ won't work",
            "err" : "norepl",
            "ok" : 1
        },
        {
            "n" : 0,
            "connectionId" : 3,
            "wnote" : "no replication has been enabled, so w=2+ won't work",
            "err" : "norepl",
            "ok" : 1
        }
    ],
    "ok" : 1
}
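The document above is the getLastError reply the test inspects: mongos includes each shard's raw response under shardRawGLE, and because the shards here are standalone mongods, a w:2 request comes back with wnote/err "norepl" from each of them. A sketch of provoking a similar reply, assuming a shell connected to mongos (the collection and update below are hypothetical, for illustration only):

    // Write through mongos, then ask for acknowledgement from 2 replicas.
    var testdb = db.getSiblingDB("test");
    testdb.foo.update({ x: 1 }, { $set: { y: 1 } });   // hypothetical update, matches nothing
    printjson(testdb.runCommand({ getLastError: 1, w: 2, wtimeout: 1000 }));
    // With non-replica-set shards, each per-shard reply carries err: "norepl".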
m30999| Thu Jun 14 01:34:53 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:34:53 [conn3] end connection 127.0.0.1:51494 (6 connections now open)
m30000| Thu Jun 14 01:34:53 [conn4] end connection 127.0.0.1:51497 (5 connections now open)
m30000| Thu Jun 14 01:34:53 [conn6] end connection 127.0.0.1:51499 (4 connections now open)
m30000| Thu Jun 14 01:34:53 [conn7] end connection 127.0.0.1:51502 (3 connections now open)
m30001| Thu Jun 14 01:34:53 [initandlisten] connection accepted from 127.0.0.1:42908 #2 (2 connections now open)
m30001| Thu Jun 14 01:34:53 [initandlisten] connection accepted from 127.0.0.1:42910 #3 (3 connections now open)
m30001| Thu Jun 14 01:34:53 [conn3] end connection 127.0.0.1:42910 (2 connections now open)
m30000| Thu Jun 14 01:34:54 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 32MB, took 0.539 secs
Thu Jun 14 01:34:54 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:34:54 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:34:54 [interruptThread] now exiting
m30000| Thu Jun 14 01:34:54 dbexit:
m30000| Thu Jun 14 01:34:54 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:34:54 [interruptThread] closing listening socket: 28
m30000| Thu Jun 14 01:34:54 [interruptThread] closing listening socket: 29
m30000| Thu Jun 14 01:34:54 [interruptThread] closing listening socket: 30
m30000| Thu Jun 14 01:34:54 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:34:54 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:34:54 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:34:54 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:34:54 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:34:54 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:34:54 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:34:54 dbexit: really exiting now
Thu Jun 14 01:34:55 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:34:55 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:34:55 [interruptThread] now exiting
m30001| Thu Jun 14 01:34:55 dbexit:
m30001| Thu Jun 14 01:34:55 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:34:55 [interruptThread] closing listening socket: 31
m30001| Thu Jun 14 01:34:55 [interruptThread] closing listening socket: 32
m30001| Thu Jun 14 01:34:55 [interruptThread] closing listening socket: 33
m30001| Thu Jun 14 01:34:55 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:34:55 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:34:55 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:34:55 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:34:55 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:34:55 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:34:55 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:34:55 dbexit: really exiting now
Thu Jun 14 01:34:56 shell: stopped mongo program on port 30001
*** ShardingTest test completed successfully in 4.056 seconds ***
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
"useHostName" : true,
"oplogSize" : 10,
"keyFile" : undefined,
"port" : 31100,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 0,
"node" : 0,
"set" : "test-rs0"
},
"restart" : undefined
}
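Blocks like the one above are the per-node options the shell's ReplSetTest harness prints before launching each mongod of the set. A rough sketch of driving the harness from a jstest (option and method names assumed for this era of the shell's test utilities):

    // Start a 3-node set named test-rs0 with a 10MB oplog, as in the log above.
    var rst = new ReplSetTest({ name: "test-rs0", nodes: 3, oplogSize: 10 });  // assumed options
    rst.startSet();          // launches the mongods (ports 31100-31102 here)
    rst.initiate();          // sends the replSetInitiate config shown further below
    var primary = rst.getMaster();   // shells of this era use getMaster()
    rst.stopSet();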
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-0'
Thu Jun 14 01:34:56 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 10 --port 31100 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:34:56
m31100| Thu Jun 14 01:34:56 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:34:56
m31100| Thu Jun 14 01:34:56 [initandlisten] MongoDB starting : pid=25103 port=31100 dbpath=/data/db/test-rs0-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:34:56 [initandlisten]
m31100| Thu Jun 14 01:34:56 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:34:56 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:34:56 [initandlisten]
m31100| Thu Jun 14 01:34:56 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:34:56 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:34:56 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:34:56 [initandlisten]
m31100| Thu Jun 14 01:34:56 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:34:56 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:34:56 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:34:56 [initandlisten] options: { dbpath: "/data/db/test-rs0-0", noprealloc: true, oplogSize: 10, port: 31100, replSet: "test-rs0", rest: true, smallfiles: true }
m31100| Thu Jun 14 01:34:56 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:34:56 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:34:56 [initandlisten] connection accepted from 10.255.119.66:43590 #1 (1 connection now open)
m31100| Thu Jun 14 01:34:56 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:34:56 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to domU-12-31-39-01-70-B4:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
"useHostName" : true,
"oplogSize" : 10,
"keyFile" : undefined,
"port" : 31101,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 0,
"node" : 1,
"set" : "test-rs0"
},
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-1'
m31100| Thu Jun 14 01:34:56 [initandlisten] connection accepted from 127.0.0.1:39366 #2 (2 connections now open)
Thu Jun 14 01:34:56 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 10 --port 31101 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:34:56
m31101| Thu Jun 14 01:34:56 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:34:56
m31101| Thu Jun 14 01:34:56 [initandlisten] MongoDB starting : pid=25119 port=31101 dbpath=/data/db/test-rs0-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:34:56 [initandlisten]
m31101| Thu Jun 14 01:34:56 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:34:56 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:34:56 [initandlisten]
m31101| Thu Jun 14 01:34:56 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:34:56 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:34:56 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:34:56 [initandlisten]
m31101| Thu Jun 14 01:34:56 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:34:56 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:34:56 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:34:56 [initandlisten] options: { dbpath: "/data/db/test-rs0-1", noprealloc: true, oplogSize: 10, port: 31101, replSet: "test-rs0", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:34:56 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:34:56 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:34:56 [initandlisten] connection accepted from 10.255.119.66:37968 #1 (1 connection now open)
m31101| Thu Jun 14 01:34:56 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:34:56 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101
]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
"useHostName" : true,
"oplogSize" : 10,
"keyFile" : undefined,
"port" : 31102,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 0,
"node" : 2,
"set" : "test-rs0"
},
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-2'
m31101| Thu Jun 14 01:34:57 [initandlisten] connection accepted from 127.0.0.1:39715 #2 (2 connections now open)
Thu Jun 14 01:34:57 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 10 --port 31102 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Thu Jun 14 01:34:57
m31102| Thu Jun 14 01:34:57 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Thu Jun 14 01:34:57
m31102| Thu Jun 14 01:34:57 [initandlisten] MongoDB starting : pid=25135 port=31102 dbpath=/data/db/test-rs0-2 32-bit host=domU-12-31-39-01-70-B4
m31102| Thu Jun 14 01:34:57 [initandlisten]
m31102| Thu Jun 14 01:34:57 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Thu Jun 14 01:34:57 [initandlisten] ** Not recommended for production.
m31102| Thu Jun 14 01:34:57 [initandlisten]
m31102| Thu Jun 14 01:34:57 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Thu Jun 14 01:34:57 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Thu Jun 14 01:34:57 [initandlisten] ** with --journal, the limit is lower
m31102| Thu Jun 14 01:34:57 [initandlisten]
m31102| Thu Jun 14 01:34:57 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Thu Jun 14 01:34:57 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Thu Jun 14 01:34:57 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31102| Thu Jun 14 01:34:57 [initandlisten] options: { dbpath: "/data/db/test-rs0-2", noprealloc: true, oplogSize: 10, port: 31102, replSet: "test-rs0", rest: true, smallfiles: true }
m31102| Thu Jun 14 01:34:57 [websvr] admin web console waiting for connections on port 32102
m31102| Thu Jun 14 01:34:57 [initandlisten] waiting for connections on port 31102
m31102| Thu Jun 14 01:34:57 [initandlisten] connection accepted from 10.255.119.66:53374 #1 (1 connection now open)
m31102| Thu Jun 14 01:34:57 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Thu Jun 14 01:34:57 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Thu Jun 14 01:34:57 [initandlisten] connection accepted from 127.0.0.1:53161 #2 (2 connections now open)
[
    connection to domU-12-31-39-01-70-B4:31100,
    connection to domU-12-31-39-01-70-B4:31101,
    connection to domU-12-31-39-01-70-B4:31102
]
{
    "replSetInitiate" : {
        "_id" : "test-rs0",
        "members" : [
            {
                "_id" : 0,
                "host" : "domU-12-31-39-01-70-B4:31100"
            },
            {
                "_id" : 1,
                "host" : "domU-12-31-39-01-70-B4:31101"
            },
            {
                "_id" : 2,
                "host" : "domU-12-31-39-01-70-B4:31102"
            }
        ]
    }
}
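The document above is the replSetInitiate config the test sends to the first member. A minimal sketch of issuing the same command by hand, with host names copied from the log; rs.initiate(cfg) is the shell helper the startup notice above refers to and wraps the replSetInitiate admin command:

    // Sketch: initiate the set manually with the config document shown above.
    var cfg = {
        _id: "test-rs0",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31101" },
            { _id: 2, host: "domU-12-31-39-01-70-B4:31102" }
        ]
    };
    rs.initiate(cfg);   // on success the reply { info: ..., ok: 1 } is logged below
    rs.status();        // members then move through STARTUP2/RECOVERING to PRIMARY/SECONDARY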
m31100| Thu Jun 14 01:34:57 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:34:57 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Thu Jun 14 01:34:57 [initandlisten] connection accepted from 10.255.119.66:37973 #3 (3 connections now open)
m31102| Thu Jun 14 01:34:57 [initandlisten] connection accepted from 10.255.119.66:53377 #3 (3 connections now open)
m31100| Thu Jun 14 01:34:57 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:34:57 [conn2] ******
m31100| Thu Jun 14 01:34:57 [conn2] creating replication oplog of size: 10MB...
m31100| Thu Jun 14 01:34:57 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:34:57 [FileAllocator] creating directory /data/db/test-rs0-0/_tmp
m31100| Thu Jun 14 01:34:57 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.ns, size: 16MB, took 0.254 secs
m31100| Thu Jun 14 01:34:57 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:34:57 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.0, size: 16MB, took 0.254 secs
m31100| Thu Jun 14 01:34:57 [conn2] ******
m31100| Thu Jun 14 01:34:57 [conn2] replSet info saving a newer config version to local.system.replset
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
m31100| Thu Jun 14 01:34:57 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:34:57 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:34:57 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "test-rs0", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31102" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:543564 w:33 reslen:112 542ms
m31100| Thu Jun 14 01:35:06 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:06 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:35:06 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31100| Thu Jun 14 01:35:06 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:35:06 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:35:06 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:06 [initandlisten] connection accepted from 10.255.119.66:43600 #3 (3 connections now open)
m31101| Thu Jun 14 01:35:06 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:35:06 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:35:06 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:35:06 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:35:06 [FileAllocator] creating directory /data/db/test-rs0-1/_tmp
m31102| Thu Jun 14 01:35:07 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:07 [initandlisten] connection accepted from 10.255.119.66:43601 #4 (4 connections now open)
m31102| Thu Jun 14 01:35:07 [rsStart] replSet I am domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:35:07 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Thu Jun 14 01:35:07 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:35:07 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.ns, filling with zeroes...
m31102| Thu Jun 14 01:35:07 [FileAllocator] creating directory /data/db/test-rs0-2/_tmp
m31101| Thu Jun 14 01:35:07 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.ns, size: 16MB, took 0.227 secs
m31101| Thu Jun 14 01:35:07 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.0, filling with zeroes...
m31102| Thu Jun 14 01:35:07 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.ns, size: 16MB, took 0.554 secs
m31102| Thu Jun 14 01:35:07 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.0, filling with zeroes...
m31101| Thu Jun 14 01:35:07 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.0, size: 16MB, took 0.593 secs
m31102| Thu Jun 14 01:35:08 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.0, size: 16MB, took 0.321 secs
m31101| Thu Jun 14 01:35:08 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:35:08 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:35:08 [rsSync] ******
m31101| Thu Jun 14 01:35:08 [rsSync] creating replication oplog of size: 10MB...
m31102| Thu Jun 14 01:35:08 [rsStart] replSet saveConfigLocally done
m31102| Thu Jun 14 01:35:08 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:35:08 [rsSync] ******
m31101| Thu Jun 14 01:35:08 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:35:08 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Thu Jun 14 01:35:08 [rsSync] ******
m31102| Thu Jun 14 01:35:08 [rsSync] creating replication oplog of size: 10MB...
m31102| Thu Jun 14 01:35:08 [rsSync] ******
m31102| Thu Jun 14 01:35:08 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:35:08 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Thu Jun 14 01:35:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31100| Thu Jun 14 01:35:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:35:08 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31100| Thu Jun 14 01:35:08 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31101| Thu Jun 14 01:35:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:35:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31102| Thu Jun 14 01:35:08 [initandlisten] connection accepted from 10.255.119.66:53380 #4 (4 connections now open)
m31101| Thu Jun 14 01:35:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31101| Thu Jun 14 01:35:08 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31102| Thu Jun 14 01:35:09 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31102| Thu Jun 14 01:35:09 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31101| Thu Jun 14 01:35:09 [initandlisten] connection accepted from 10.255.119.66:37978 #4 (4 connections now open)
m31102| Thu Jun 14 01:35:09 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31102| Thu Jun 14 01:35:09 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:35:14 [rsMgr] replSet info electSelf 0
m31102| Thu Jun 14 01:35:14 [conn3] replSet RECOVERING
m31102| Thu Jun 14 01:35:14 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31101| Thu Jun 14 01:35:14 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:35:14 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:35:14 [rsMgr] replSet PRIMARY
m31101| Thu Jun 14 01:35:14 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31101| Thu Jun 14 01:35:14 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31102| Thu Jun 14 01:35:15 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31102| Thu Jun 14 01:35:15 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31100| Thu Jun 14 01:35:15 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.ns, filling with zeroes...
m31100| Thu Jun 14 01:35:16 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.ns, size: 16MB, took 0.228 secs
m31100| Thu Jun 14 01:35:16 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.0, filling with zeroes...
m31100| Thu Jun 14 01:35:16 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.0, size: 16MB, took 0.251 secs
m31100| Thu Jun 14 01:35:16 [conn2] build index admin.foo { _id: 1 }
m31100| Thu Jun 14 01:35:16 [conn2] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:35:16 [conn2] insert admin.foo keyUpdates:0 locks(micros) W:543564 w:490309 489ms
ReplSetTest Timestamp(1339652116000, 1)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31100| Thu Jun 14 01:35:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31100| Thu Jun 14 01:35:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31101| Thu Jun 14 01:35:20 [conn3] end connection 10.255.119.66:37973 (3 connections now open)
m31101| Thu Jun 14 01:35:20 [initandlisten] connection accepted from 10.255.119.66:37979 #5 (4 connections now open)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31100| Thu Jun 14 01:35:22 [conn3] end connection 10.255.119.66:43600 (3 connections now open)
m31100| Thu Jun 14 01:35:22 [initandlisten] connection accepted from 10.255.119.66:43605 #5 (4 connections now open)
m31100| Thu Jun 14 01:35:23 [conn4] end connection 10.255.119.66:43601 (3 connections now open)
m31100| Thu Jun 14 01:35:23 [initandlisten] connection accepted from 10.255.119.66:43606 #6 (4 connections now open)
m31101| Thu Jun 14 01:35:24 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:35:24 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:24 [initandlisten] connection accepted from 10.255.119.66:43607 #7 (5 connections now open)
m31101| Thu Jun 14 01:35:24 [rsSync] build index local.me { _id: 1 }
m31101| Thu Jun 14 01:35:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:35:24 [rsSync] replSet initial sync drop all databases
m31101| Thu Jun 14 01:35:24 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Thu Jun 14 01:35:24 [rsSync] replSet initial sync clone all databases
m31101| Thu Jun 14 01:35:24 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:35:24 [initandlisten] connection accepted from 10.255.119.66:43608 #8 (6 connections now open)
m31102| Thu Jun 14 01:35:24 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:35:24 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:35:24 [rsSync] build index local.me { _id: 1 }
m31102| Thu Jun 14 01:35:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:35:24 [rsSync] replSet initial sync drop all databases
m31102| Thu Jun 14 01:35:24 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Thu Jun 14 01:35:24 [rsSync] replSet initial sync clone all databases
m31102| Thu Jun 14 01:35:24 [rsSync] replSet initial sync cloning db: admin
m31102| Thu Jun 14 01:35:24 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.ns, filling with zeroes...
m31100| Thu Jun 14 01:35:24 [initandlisten] connection accepted from 10.255.119.66:43609 #9 (7 connections now open)
m31100| Thu Jun 14 01:35:24 [initandlisten] connection accepted from 10.255.119.66:43610 #10 (8 connections now open)
m31101| Thu Jun 14 01:35:24 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.ns, filling with zeroes...
m31101| Thu Jun 14 01:35:24 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.ns, size: 16MB, took 0.564 secs
m31101| Thu Jun 14 01:35:24 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.0, filling with zeroes...
m31102| Thu Jun 14 01:35:24 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.ns, size: 16MB, took 0.552 secs
m31102| Thu Jun 14 01:35:24 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.0, filling with zeroes...
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
m31102| Thu Jun 14 01:35:25 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.0, size: 16MB, took 0.616 secs
m31102| Thu Jun 14 01:35:25 [rsSync] build index admin.foo { _id: 1 }
m31102| Thu Jun 14 01:35:25 [rsSync] fastBuildIndex dupsToDrop:0
m31102| Thu Jun 14 01:35:25 [rsSync] build index done. scanned 1 total records. 0 secs
m31102| Thu Jun 14 01:35:25 [rsSync] replSet initial sync data copy, starting syncup
m31102| Thu Jun 14 01:35:25 [rsSync] replSet initial sync building indexes
m31102| Thu Jun 14 01:35:25 [rsSync] replSet initial sync cloning indexes for : admin
m31101| Thu Jun 14 01:35:25 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.0, size: 16MB, took 0.616 secs
m31101| Thu Jun 14 01:35:25 [rsSync] build index admin.foo { _id: 1 }
m31101| Thu Jun 14 01:35:25 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Thu Jun 14 01:35:25 [rsSync] build index done. scanned 1 total records. 0 secs
m31101| Thu Jun 14 01:35:25 [rsSync] replSet initial sync data copy, starting syncup
m31100| Thu Jun 14 01:35:25 [conn10] end connection 10.255.119.66:43610 (7 connections now open)
m31100| Thu Jun 14 01:35:25 [conn8] end connection 10.255.119.66:43608 (6 connections now open)
m31100| Thu Jun 14 01:35:25 [initandlisten] connection accepted from 10.255.119.66:43611 #11 (8 connections now open)
m31101| Thu Jun 14 01:35:25 [rsSync] replSet initial sync building indexes
m31101| Thu Jun 14 01:35:25 [rsSync] replSet initial sync cloning indexes for : admin
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31100| Thu Jun 14 01:35:25 [initandlisten] connection accepted from 10.255.119.66:43612 #12 (8 connections now open)
m31100| Thu Jun 14 01:35:25 [conn11] end connection 10.255.119.66:43611 (7 connections now open)
m31102| Thu Jun 14 01:35:25 [rsSync] replSet initial sync query minValid
m31102| Thu Jun 14 01:35:25 [rsSync] replSet initial sync finishing up
m31102| Thu Jun 14 01:35:25 [rsSync] replSet set minValid=4fd97814:1
m31102| Thu Jun 14 01:35:25 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Thu Jun 14 01:35:25 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:35:25 [rsSync] replSet initial sync query minValid
m31101| Thu Jun 14 01:35:25 [rsSync] replSet initial sync finishing up
m31100| Thu Jun 14 01:35:25 [conn12] end connection 10.255.119.66:43612 (6 connections now open)
m31102| Thu Jun 14 01:35:25 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:35:25 [conn9] end connection 10.255.119.66:43609 (5 connections now open)
m31101| Thu Jun 14 01:35:25 [rsSync] replSet set minValid=4fd97814:1
m31101| Thu Jun 14 01:35:25 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Thu Jun 14 01:35:25 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:35:25 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:35:25 [conn7] end connection 10.255.119.66:43607 (4 connections now open)
m31101| Thu Jun 14 01:35:26 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:26 [initandlisten] connection accepted from 10.255.119.66:43613 #13 (5 connections now open)
m31102| Thu Jun 14 01:35:26 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:26 [initandlisten] connection accepted from 10.255.119.66:43614 #14 (6 connections now open)
m31102| Thu Jun 14 01:35:26 [rsSync] replSet SECONDARY
m31102| Thu Jun 14 01:35:26 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:26 [initandlisten] connection accepted from 10.255.119.66:43615 #15 (7 connections now open)
m31101| Thu Jun 14 01:35:26 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:35:26 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:26 [initandlisten] connection accepted from 10.255.119.66:43616 #16 (8 connections now open)
m31100| Thu Jun 14 01:35:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m31100| Thu Jun 14 01:35:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31101| Thu Jun 14 01:35:26 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m31102| Thu Jun 14 01:35:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
{
    "ts" : Timestamp(1339652116000, 1),
    "h" : NumberLong("5580234077484613649"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd97813eec35d80ffdfc91f"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339652116000:1 and latest is 1339652116000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 1
{
    "ts" : Timestamp(1339652116000, 1),
    "h" : NumberLong("5580234077484613649"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd97813eec35d80ffdfc91f"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31102 is 1339652116000:1 and latest is 1339652116000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31102 is 1
ReplSetTest await synced=true
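The two oplog documents printed above are what the await loop compares across members. A minimal sketch of reading the newest oplog entry back from any member to make the same check by hand:

    // Sketch: read the most recent oplog entry; op "i" on admin.foo matches the log above.
    var last = db.getSiblingDB("local").oplog.rs
                   .find().sort({ $natural: -1 }).limit(1).next();
    printjson(last);    // its ts should equal the "latest" timestamp reported above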
Thu Jun 14 01:35:27 starting new replica set monitor for replica set test-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
Thu Jun 14 01:35:27 successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set test-rs0
m31100| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:43617 #17 (9 connections now open)
Thu Jun 14 01:35:27 changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from test-rs0/
Thu Jun 14 01:35:27 trying to add new host domU-12-31-39-01-70-B4:31100 to replica set test-rs0
Thu Jun 14 01:35:27 successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set test-rs0
m31100| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:43618 #18 (10 connections now open)
Thu Jun 14 01:35:27 trying to add new host domU-12-31-39-01-70-B4:31101 to replica set test-rs0
m31101| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:37994 #6 (5 connections now open)
Thu Jun 14 01:35:27 successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set test-rs0
Thu Jun 14 01:35:27 trying to add new host domU-12-31-39-01-70-B4:31102 to replica set test-rs0
m31102| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:53398 #5 (5 connections now open)
Thu Jun 14 01:35:27 successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set test-rs0
m31100| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:43621 #19 (11 connections now open)
m31100| Thu Jun 14 01:35:27 [conn17] end connection 10.255.119.66:43617 (10 connections now open)
Thu Jun 14 01:35:27 Primary for replica set test-rs0 changed to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:27 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Thu Jun 14 01:35:27 [slaveTracking] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:37997 #7 (6 connections now open)
m31102| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:53401 #6 (6 connections now open)
Thu Jun 14 01:35:27 replica set monitor for replica set test-rs0 started, address is test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
Thu Jun 14 01:35:27 [ReplicaSetMonitorWatcher] starting
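The replica set monitor lines above come from the shell opening a set-aware connection using the "setName/host,host,..." seed form. A minimal sketch of a connection that produces this kind of output, with hosts copied from the log:

    // Sketch: a replica-set connection from the shell; this starts the
    // ReplicaSetMonitor whose discovery and primary-change messages appear above.
    var conn = new Mongo("test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102");
    var testDB = conn.getDB("test");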
m31100| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:43624 #20 (11 connections now open)
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:35:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m29000| Thu Jun 14 01:35:27
m29000| Thu Jun 14 01:35:27 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:35:27
m29000| Thu Jun 14 01:35:27 [initandlisten] MongoDB starting : pid=25237 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:35:27 [initandlisten]
m29000| Thu Jun 14 01:35:27 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:35:27 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:35:27 [initandlisten]
m29000| Thu Jun 14 01:35:27 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:35:27 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:35:27 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:35:27 [initandlisten]
m29000| Thu Jun 14 01:35:27 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:35:27 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:35:27 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:35:27 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:35:27 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:35:27 [websvr] admin web console waiting for connections on port 30000
"domU-12-31-39-01-70-B4:29000"
m29000| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 127.0.0.1:38318 #1 (1 connection now open)
ShardingTest test :
{
    "config" : "domU-12-31-39-01-70-B4:29000",
    "shards" : [
        connection to test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
    ]
}
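ShardingTest then brings up the single config server on 29000 and a mongos on 30999 in front of the replica-set shard. A rough sketch of the helper call; the constructor option names (shards, rs, mongos) are assumptions and are not confirmed by this log:

    // Rough sketch only -- option names are assumptions, not read from this log.
    var st = new ShardingTest({
        name: "test",
        shards: 1,          // one shard, backed by the test-rs0 set started above
        rs: { nodes: 3 },   // assumption: run the shard as a 3-node replica set
        mongos: 1           // the mongos logged here on port 30999
    });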
Thu Jun 14 01:35:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:29000
m29000| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:42288 #2 (2 connections now open)
m29000| Thu Jun 14 01:35:27 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:35:27 [FileAllocator] creating directory /data/db/test-config0/_tmp
m30999| Thu Jun 14 01:35:27 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:35:27 [mongosMain] MongoS version 2.1.2-pre- starting: pid=25250 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:35:27 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:35:27 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:35:27 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", port: 30999 }
m29000| Thu Jun 14 01:35:27 [initandlisten] connection accepted from 10.255.119.66:42290 #3 (3 connections now open)
m29000| Thu Jun 14 01:35:27 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.347 secs
m29000| Thu Jun 14 01:35:27 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:35:28 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.302 secs
m30999| Thu Jun 14 01:35:28 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:35:28 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:35:28 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:35:28 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:35:28 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:35:28
m30999| Thu Jun 14 01:35:28 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:35:28 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30999:1339652128:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:35:28 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652128:1804289383' acquired, ts : 4fd97820bf3df24ee0f4ce54
m30999| Thu Jun 14 01:35:28 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652128:1804289383' unlocked.
m29000| Thu Jun 14 01:35:28 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:35:28 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn2] insert config.settings keyUpdates:0 locks(micros) w:663205 663ms
m29000| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:42294 #4 (4 connections now open)
m29000| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:42295 #5 (5 connections now open)
m29000| Thu Jun 14 01:35:28 [conn5] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn4] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn4] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:35:28 [conn4] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn4] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn4] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:35:28 [conn4] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:42296 #6 (6 connections now open)
m29000| Thu Jun 14 01:35:28 [conn5] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn6] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:35:28 [conn4] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0.003 secs
m29000| Thu Jun 14 01:35:28 [conn4] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:35:28 [mongosMain] connection accepted from 127.0.0.1:43530 #1 (1 connection now open)
ShardingTest undefined going to add shard : test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:35:28 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:35:28 [conn4] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:35:28 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:35:28 [conn] starting new replica set monitor for replica set test-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31100| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:43637 #21 (12 connections now open)
m30999| Thu Jun 14 01:35:28 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set test-rs0
m30999| Thu Jun 14 01:35:28 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from test-rs0/
m30999| Thu Jun 14 01:35:28 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set test-rs0
m30999| Thu Jun 14 01:35:28 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set test-rs0
m30999| Thu Jun 14 01:35:28 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set test-rs0
m31100| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:43638 #22 (13 connections now open)
m30999| Thu Jun 14 01:35:28 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set test-rs0
m30999| Thu Jun 14 01:35:28 [conn] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set test-rs0
m31101| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:38014 #8 (7 connections now open)
m30999| Thu Jun 14 01:35:28 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set test-rs0
m31102| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:53418 #7 (7 connections now open)
m31100| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:43641 #23 (14 connections now open)
m31100| Thu Jun 14 01:35:28 [conn21] end connection 10.255.119.66:43637 (13 connections now open)
m30999| Thu Jun 14 01:35:28 [conn] Primary for replica set test-rs0 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:38017 #9 (8 connections now open)
m31102| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:53421 #8 (8 connections now open)
m30999| Thu Jun 14 01:35:28 [conn] replica set monitor for replica set test-rs0 started, address is test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:35:28 [ReplicaSetMonitorWatcher] starting
m31100| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:43644 #24 (14 connections now open)
m30999| Thu Jun 14 01:35:28 [conn] going to add shard: { _id: "test-rs0", host: "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" }
{ "shardAdded" : "test-rs0", "ok" : 1 }
m30999| Thu Jun 14 01:35:28 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:35:28 [conn] put [test] on: test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:35:28 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:35:28 [conn] CMD: shardcollection: { shardCollection: "test.user", key: { x: 1.0 } }
m30999| Thu Jun 14 01:35:28 [conn] enable sharding on: test.user with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:35:28 [conn] going to create 1 chunk(s) for: test.user using new epoch 4fd97820bf3df24ee0f4ce55
m31100| Thu Jun 14 01:35:28 [FileAllocator] allocating new datafile /data/db/test-rs0-0/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:35:28 [conn] ChunkManager: time to load chunks for test.user: 0ms sequenceNumber: 2 version: 1|0||4fd97820bf3df24ee0f4ce55 based on: (empty)
m29000| Thu Jun 14 01:35:28 [conn4] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:35:28 [conn4] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:35:28 [initandlisten] connection accepted from 10.255.119.66:43645 #25 (15 connections now open)
m30999| Thu Jun 14 01:35:28 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31100 serverID: 4fd97820bf3df24ee0f4ce53
m30999| Thu Jun 14 01:35:28 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31101 serverID: 4fd97820bf3df24ee0f4ce53
m30999| Thu Jun 14 01:35:28 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31102 serverID: 4fd97820bf3df24ee0f4ce53
m29000| Thu Jun 14 01:35:29 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.954 secs
m31100| Thu Jun 14 01:35:29 [FileAllocator] done allocating datafile /data/db/test-rs0-0/test.ns, size: 16MB, took 0.873 secs
m31100| Thu Jun 14 01:35:29 [FileAllocator] allocating new datafile /data/db/test-rs0-0/test.0, filling with zeroes...
m31100| Thu Jun 14 01:35:29 [FileAllocator] done allocating datafile /data/db/test-rs0-0/test.0, size: 16MB, took 0.38 secs
m31100| Thu Jun 14 01:35:29 [conn24] build index test.user { _id: 1 }
m31100| Thu Jun 14 01:35:29 [conn24] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:35:29 [conn24] info: creating collection test.user on add index
m31100| Thu Jun 14 01:35:29 [conn24] build index test.user { x: 1.0 }
m31100| Thu Jun 14 01:35:29 [conn24] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:35:29 [conn24] insert test.system.indexes keyUpdates:0 locks(micros) R:5 W:82 r:182 w:1264478 1264ms
m31100| Thu Jun 14 01:35:29 [conn25] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd97820bf3df24ee0f4ce53'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:688 reslen:51 1262ms
m31100| Thu Jun 14 01:35:29 [conn16] getmore local.oplog.rs query: { ts: { $gte: new Date(5753762026237198337) } } cursorid:2004547986414974560 ntoreturn:0 keyUpdates:0 locks(micros) r:318 nreturned:1 reslen:37 3303ms
m31100| Thu Jun 14 01:35:29 [conn25] no current chunk manager found for this shard, will initialize
m29000| Thu Jun 14 01:35:29 [initandlisten] connection accepted from 10.255.119.66:42307 #7 (7 connections now open)
m30999| Thu Jun 14 01:35:29 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:29000 serverID: 4fd97820bf3df24ee0f4ce53
m31100| Thu Jun 14 01:35:29 [conn15] getmore local.oplog.rs query: { ts: { $gte: new Date(5753762026237198337) } } cursorid:3475005477006161501 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:378 nreturned:1 reslen:37 3311ms
m31100| Thu Jun 14 01:35:29 [conn14] getmore local.oplog.rs query: { ts: { $gte: new Date(5753762026237198337) } } cursorid:7750845114678588820 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:289 nreturned:1 reslen:147 3511ms
m31100| Thu Jun 14 01:35:29 [conn13] getmore local.oplog.rs query: { ts: { $gte: new Date(5753762026237198337) } } cursorid:824787248355292557 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:510 nreturned:1 reslen:147 3519ms
m29000| Thu Jun 14 01:35:29 [initandlisten] connection accepted from 10.255.119.66:42308 #8 (8 connections now open)
m31101| Thu Jun 14 01:35:30 [FileAllocator] allocating new datafile /data/db/test-rs0-1/test.ns, filling with zeroes...
m31102| Thu Jun 14 01:35:30 [FileAllocator] allocating new datafile /data/db/test-rs0-2/test.ns, filling with zeroes...
m31101| Thu Jun 14 01:35:30 [FileAllocator] done allocating datafile /data/db/test-rs0-1/test.ns, size: 16MB, took 0.567 secs
m31101| Thu Jun 14 01:35:30 [FileAllocator] allocating new datafile /data/db/test-rs0-1/test.0, filling with zeroes...
m31102| Thu Jun 14 01:35:30 [FileAllocator] done allocating datafile /data/db/test-rs0-2/test.ns, size: 16MB, took 0.539 secs
m31102| Thu Jun 14 01:35:30 [FileAllocator] allocating new datafile /data/db/test-rs0-2/test.0, filling with zeroes...
m31102| Thu Jun 14 01:35:31 [FileAllocator] done allocating datafile /data/db/test-rs0-2/test.0, size: 16MB, took 0.361 secs
m31102| Thu Jun 14 01:35:31 [rsSync] build index test.user { _id: 1 }
m31102| Thu Jun 14 01:35:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:35:31 [rsSync] info: creating collection test.user on add index
m31102| Thu Jun 14 01:35:31 [rsSync] build index test.user { x: 1.0 }
m31102| Thu Jun 14 01:35:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:35:31 [FileAllocator] done allocating datafile /data/db/test-rs0-1/test.0, size: 16MB, took 0.551 secs
m31101| Thu Jun 14 01:35:31 [rsSync] build index test.user { _id: 1 }
m31101| Thu Jun 14 01:35:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:35:31 [rsSync] info: creating collection test.user on add index
m31101| Thu Jun 14 01:35:31 [rsSync] build index test.user { x: 1.0 }
m31101| Thu Jun 14 01:35:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:35:32 [conn25] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:702 w:151 reslen:94 2645ms
m31100| Thu Jun 14 01:35:32 [conn24] request split points lookup for chunk test.user { : MinKey } -->> { : MaxKey }
m31100| Thu Jun 14 01:35:32 [conn24] max number of requested split points reached (2) before the end of chunk test.user { : MinKey } -->> { : MaxKey }
m29000| Thu Jun 14 01:35:32 [initandlisten] connection accepted from 10.255.119.66:42309 #9 (9 connections now open)
m31100| Thu Jun 14 01:35:32 [conn24] received splitChunk request: { splitChunk: "test.user", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "test-rs0", splitKeys: [ { x: 0.0 } ], shardId: "test.user-x_MinKey", configdb: "domU-12-31-39-01-70-B4:29000" }
m31100| Thu Jun 14 01:35:32 [conn24] created new distributed lock for test.user on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Thu Jun 14 01:35:32 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:31100:1339652132:697461879 (sleeping for 30000ms)
m31100| Thu Jun 14 01:35:32 [conn24] distributed lock 'test.user/domU-12-31-39-01-70-B4:31100:1339652132:697461879' acquired, ts : 4fd978244f922b4d7f377c8f
m31100| Thu Jun 14 01:35:32 [conn24] splitChunk accepted at version 1|0||4fd97820bf3df24ee0f4ce55
m31100| Thu Jun 14 01:35:32 [conn24] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:35:32-0", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:43644", time: new Date(1339652132387), what: "split", ns: "test.user", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97820bf3df24ee0f4ce55') }, right: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97820bf3df24ee0f4ce55') } } }
m31100| Thu Jun 14 01:35:32 [conn24] distributed lock 'test.user/domU-12-31-39-01-70-B4:31100:1339652132:697461879' unlocked.
m30999| Thu Jun 14 01:35:32 [conn] ChunkManager: time to load chunks for test.user: 0ms sequenceNumber: 3 version: 1|2||4fd97820bf3df24ee0f4ce55 based on: 1|0||4fd97820bf3df24ee0f4ce55
m30999| Thu Jun 14 01:35:32 [conn] autosplitted test.user shard: ns:test.user at: test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } on: { x: 0.0 } (splitThreshold 921)
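The mongos log above shows sharding being enabled on test, test.user being sharded on { x: 1.0 }, and the first autosplit at x: 0 once the split threshold is crossed. A minimal sketch of the same steps from the shell:

    // Sketch: enable sharding and shard the collection, matching the
    // shardCollection command logged by mongos above.
    sh.enableSharding("test");
    sh.shardCollection("test.user", { x: 1 });
    // once enough data is inserted, autosplitting produces chunks like the
    // MinKey..0 / 0..MaxKey pair logged above; sh.status() shows the result
    sh.status();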
m30999| Thu Jun 14 01:35:32 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:35:32 [conn3] end connection 10.255.119.66:42290 (8 connections now open)
m29000| Thu Jun 14 01:35:32 [conn4] end connection 10.255.119.66:42294 (7 connections now open)
m29000| Thu Jun 14 01:35:32 [conn6] end connection 10.255.119.66:42296 (6 connections now open)
m31100| Thu Jun 14 01:35:32 [conn22] end connection 10.255.119.66:43638 (14 connections now open)
m31101| Thu Jun 14 01:35:32 [conn8] end connection 10.255.119.66:38014 (7 connections now open)
m31102| Thu Jun 14 01:35:32 [conn7] end connection 10.255.119.66:53418 (7 connections now open)
m31100| Thu Jun 14 01:35:32 [conn24] end connection 10.255.119.66:43644 (13 connections now open)
m29000| Thu Jun 14 01:35:32 [conn8] end connection 10.255.119.66:42308 (5 connections now open)
m31100| Thu Jun 14 01:35:32 [conn25] end connection 10.255.119.66:43645 (12 connections now open)
m29000| Thu Jun 14 01:35:32 [conn5] end connection 10.255.119.66:42295 (4 connections now open)
Thu Jun 14 01:35:33 shell: stopped mongo program on port 30999
Thu Jun 14 01:35:33 No db started on port: 30000
Thu Jun 14 01:35:33 shell: stopped mongo program on port 30000
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Thu Jun 14 01:35:33 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:35:33 [interruptThread] now exiting
m31100| Thu Jun 14 01:35:33 dbexit:
m31100| Thu Jun 14 01:35:33 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:35:33 [interruptThread] closing listening socket: 31
m31100| Thu Jun 14 01:35:33 [interruptThread] closing listening socket: 32
m31100| Thu Jun 14 01:35:33 [interruptThread] closing listening socket: 35
m31100| Thu Jun 14 01:35:33 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:35:33 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:35:33 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:35:33 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Thu Jun 14 01:35:33 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:35:33 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:35:33 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:35:33 [conn1] end connection 10.255.119.66:43590 (11 connections now open)
m31100| Thu Jun 14 01:35:33 dbexit: really exiting now
m31102| Thu Jun 14 01:35:33 [conn3] end connection 10.255.119.66:53377 (6 connections now open)
m31102| Thu Jun 14 01:35:33 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:35:33 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:35:33 [conn5] end connection 10.255.119.66:37979 (6 connections now open)
m31101| Thu Jun 14 01:35:33 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m29000| Thu Jun 14 01:35:33 [conn7] end connection 10.255.119.66:42307 (3 connections now open)
m29000| Thu Jun 14 01:35:33 [conn9] end connection 10.255.119.66:42309 (2 connections now open)
m31102| Thu Jun 14 01:35:34 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:35:34 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Thu Jun 14 01:35:34 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:35:34 [interruptThread] now exiting
m31101| Thu Jun 14 01:35:34 dbexit:
m31101| Thu Jun 14 01:35:34 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:35:34 [interruptThread] closing listening socket: 36
m31101| Thu Jun 14 01:35:34 [interruptThread] closing listening socket: 37
m31101| Thu Jun 14 01:35:34 [interruptThread] closing listening socket: 38
m31101| Thu Jun 14 01:35:34 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:35:34 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:35:34 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:35:34 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:35:34 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:35:34 [interruptThread] closeAllFiles() finished
m31101| Thu Jun 14 01:35:34 [interruptThread] shutdown: removing fs lock...
m31101| Thu Jun 14 01:35:34 dbexit: really exiting now
m31102| Thu Jun 14 01:35:34 [conn4] end connection 10.255.119.66:53380 (5 connections now open)
m31102| Thu Jun 14 01:35:35 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:35:35 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31101 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31101 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:35:35 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state DOWN
m31102| Thu Jun 14 01:35:35 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:35:35 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:35:35 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31102| Thu Jun 14 01:35:35 [rsMgr] replSet can't see a majority, will not try to elect self
m31102| Thu Jun 14 01:35:35 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:35:35 [rsSyncNotifier] Socket flush send() errno:32 Broken pipe 10.255.119.66:31100
m31102| Thu Jun 14 01:35:35 [rsSyncNotifier] caught exception (socket exception) in destructor (~PiggyBackData)
m31102| Thu Jun 14 01:35:35 [rsSyncNotifier] repl: couldn't connect to server domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:35:35 shell: stopped mongo program on port 31101
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
ReplSetTest stop *** Shutting down mongod in port 31102 ***
m31102| Thu Jun 14 01:35:35 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Thu Jun 14 01:35:35 [interruptThread] now exiting
m31102| Thu Jun 14 01:35:35 dbexit:
m31102| Thu Jun 14 01:35:35 [interruptThread] shutdown: going to close listening sockets...
m31102| Thu Jun 14 01:35:35 [interruptThread] closing listening socket: 39
m31102| Thu Jun 14 01:35:35 [interruptThread] closing listening socket: 40
m31102| Thu Jun 14 01:35:35 [interruptThread] closing listening socket: 41
m31102| Thu Jun 14 01:35:35 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Thu Jun 14 01:35:35 [interruptThread] shutdown: going to flush diaglog...
m31102| Thu Jun 14 01:35:35 [interruptThread] shutdown: going to close sockets...
m31102| Thu Jun 14 01:35:35 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:35:35 [interruptThread] shutdown: closing all files...
m31102| Thu Jun 14 01:35:35 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:35:35 [interruptThread] shutdown: removing fs lock...
m31102| Thu Jun 14 01:35:35 dbexit: really exiting now
Thu Jun 14 01:35:36 shell: stopped mongo program on port 31102
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
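Tear-down goes back through the same helper; continuing the ReplSetTest sketch further up, where rst is the helper object:

    // Sketch: stopSet() sends SIGTERM to each mongod and deletes the dbpaths,
    // matching the "deleting all dbpaths" line above.
    rst.stopSet();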
m29000| Thu Jun 14 01:35:36 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:35:36 [interruptThread] now exiting
m29000| Thu Jun 14 01:35:36 dbexit:
m29000| Thu Jun 14 01:35:36 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:35:36 [interruptThread] closing listening socket: 48
m29000| Thu Jun 14 01:35:36 [interruptThread] closing listening socket: 49
m29000| Thu Jun 14 01:35:36 [interruptThread] closing listening socket: 50
m29000| Thu Jun 14 01:35:36 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:35:36 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:35:36 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:35:36 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:35:36 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:35:36 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:35:36 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:35:36 dbexit: really exiting now
Thu Jun 14 01:35:37 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Thu Jun 14 01:35:37 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:35:37 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31100 failed couldn't connect to server domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:35:37 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Thu Jun 14 01:35:37 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Thu Jun 14 01:35:37 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 40.85 seconds ***
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
    "useHostName" : true,
    "oplogSize" : 10,
    "keyFile" : undefined,
    "port" : 31100,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "test-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "test",
        "shard" : 0,
        "node" : 0,
        "set" : "test-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-0'
Thu Jun 14 01:35:37 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 10 --port 31100 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:35:37
m31100| Thu Jun 14 01:35:37 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:35:37
m31100| Thu Jun 14 01:35:37 [initandlisten] MongoDB starting : pid=25310 port=31100 dbpath=/data/db/test-rs0-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:35:37 [initandlisten]
m31100| Thu Jun 14 01:35:37 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:35:37 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:35:37 [initandlisten]
m31100| Thu Jun 14 01:35:37 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:35:37 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:35:37 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:35:37 [initandlisten]
m31100| Thu Jun 14 01:35:37 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:35:37 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:35:37 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:35:37 [initandlisten] options: { dbpath: "/data/db/test-rs0-0", noprealloc: true, oplogSize: 10, port: 31100, replSet: "test-rs0", rest: true, smallfiles: true }
m31100| Thu Jun 14 01:35:37 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:35:37 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:35:37 [initandlisten] connection accepted from 10.255.119.66:43652 #1 (1 connection now open)
m31100| Thu Jun 14 01:35:37 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:35:37 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Thu Jun 14 01:35:37 [initandlisten] connection accepted from 127.0.0.1:39428 #2 (2 connections now open)
[ connection to domU-12-31-39-01-70-B4:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
    "useHostName" : true,
    "oplogSize" : 10,
    "keyFile" : undefined,
    "port" : 31101,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "test-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "test",
        "shard" : 0,
        "node" : 1,
        "set" : "test-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-1'
Thu Jun 14 01:35:37 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 10 --port 31101 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:35:37
m31101| Thu Jun 14 01:35:37 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:35:37
m31101| Thu Jun 14 01:35:37 [initandlisten] MongoDB starting : pid=25326 port=31101 dbpath=/data/db/test-rs0-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:35:37 [initandlisten]
m31101| Thu Jun 14 01:35:37 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:35:37 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:35:37 [initandlisten]
m31101| Thu Jun 14 01:35:37 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:35:37 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:35:37 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:35:37 [initandlisten]
m31101| Thu Jun 14 01:35:37 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:35:37 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:35:37 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:35:37 [initandlisten] options: { dbpath: "/data/db/test-rs0-1", noprealloc: true, oplogSize: 10, port: 31101, replSet: "test-rs0", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:35:37 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:35:37 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:35:37 [initandlisten] connection accepted from 10.255.119.66:38030 #1 (1 connection now open)
m31101| Thu Jun 14 01:35:37 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:35:37 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Thu Jun 14 01:35:37 [initandlisten] connection accepted from 127.0.0.1:39777 #2 (2 connections now open)
[
    connection to domU-12-31-39-01-70-B4:31100,
    connection to domU-12-31-39-01-70-B4:31101
]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
    "useHostName" : true,
    "oplogSize" : 10,
    "keyFile" : undefined,
    "port" : 31102,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "test-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "test",
        "shard" : 0,
        "node" : 2,
        "set" : "test-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-2'
Thu Jun 14 01:35:37 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 10 --port 31102 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Thu Jun 14 01:35:37
m31102| Thu Jun 14 01:35:37 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Thu Jun 14 01:35:37
m31102| Thu Jun 14 01:35:37 [initandlisten] MongoDB starting : pid=25342 port=31102 dbpath=/data/db/test-rs0-2 32-bit host=domU-12-31-39-01-70-B4
m31102| Thu Jun 14 01:35:37 [initandlisten]
m31102| Thu Jun 14 01:35:37 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Thu Jun 14 01:35:37 [initandlisten] ** Not recommended for production.
m31102| Thu Jun 14 01:35:37 [initandlisten]
m31102| Thu Jun 14 01:35:37 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Thu Jun 14 01:35:37 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Thu Jun 14 01:35:37 [initandlisten] ** with --journal, the limit is lower
m31102| Thu Jun 14 01:35:37 [initandlisten]
m31102| Thu Jun 14 01:35:37 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Thu Jun 14 01:35:37 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Thu Jun 14 01:35:37 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31102| Thu Jun 14 01:35:37 [initandlisten] options: { dbpath: "/data/db/test-rs0-2", noprealloc: true, oplogSize: 10, port: 31102, replSet: "test-rs0", rest: true, smallfiles: true }
m31102| Thu Jun 14 01:35:37 [initandlisten] waiting for connections on port 31102
m31102| Thu Jun 14 01:35:37 [websvr] admin web console waiting for connections on port 32102
m31102| Thu Jun 14 01:35:37 [initandlisten] connection accepted from 10.255.119.66:53436 #1 (1 connection now open)
m31102| Thu Jun 14 01:35:37 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Thu Jun 14 01:35:37 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Thu Jun 14 01:35:38 [initandlisten] connection accepted from 127.0.0.1:53223 #2 (2 connections now open)
[
    connection to domU-12-31-39-01-70-B4:31100,
    connection to domU-12-31-39-01-70-B4:31101,
    connection to domU-12-31-39-01-70-B4:31102
]
{
    "replSetInitiate" : {
        "_id" : "test-rs0",
        "members" : [
            {
                "_id" : 0,
                "host" : "domU-12-31-39-01-70-B4:31100"
            },
            {
                "_id" : 1,
                "host" : "domU-12-31-39-01-70-B4:31101"
            },
            {
                "_id" : 2,
                "host" : "domU-12-31-39-01-70-B4:31102"
            }
        ]
    }
}
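The document above is the replSetInitiate configuration the harness is about to send to the member on port 31100; the command and its 3-member parse are logged just below. A minimal sketch of issuing the same command by hand from the mongo shell, assuming the three members above are reachable (rs.initiate(cfg) from a shell connected to 31100 is the equivalent helper):

    // Minimal sketch: run replSetInitiate against one seed member.
    var admin = new Mongo("domU-12-31-39-01-70-B4:31100").getDB("admin");
    printjson(admin.runCommand({
        replSetInitiate: {
            _id: "test-rs0",
            members: [
                { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
                { _id: 1, host: "domU-12-31-39-01-70-B4:31101" },
                { _id: 2, host: "domU-12-31-39-01-70-B4:31102" }
            ]
        }
    }));
    // Expected reply, as logged below:
    // { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }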
m31100| Thu Jun 14 01:35:38 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:35:38 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Thu Jun 14 01:35:38 [initandlisten] connection accepted from 10.255.119.66:38035 #3 (3 connections now open)
m31102| Thu Jun 14 01:35:38 [initandlisten] connection accepted from 10.255.119.66:53439 #3 (3 connections now open)
m31100| Thu Jun 14 01:35:38 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:35:38 [conn2] ******
m31100| Thu Jun 14 01:35:38 [conn2] creating replication oplog of size: 10MB...
m31100| Thu Jun 14 01:35:38 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:35:38 [FileAllocator] creating directory /data/db/test-rs0-0/_tmp
Thu Jun 14 01:35:38 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 socket exception
Thu Jun 14 01:35:38 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:35:38 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31101 ok
m31101| Thu Jun 14 01:35:38 [initandlisten] connection accepted from 10.255.119.66:38037 #4 (4 connections now open)
Thu Jun 14 01:35:38 [ReplicaSetMonitorWatcher] warning: node: domU-12-31-39-01-70-B4:31101 isn't a part of set: test-rs0 ismaster: { ismaster: false, secondary: false, info: "can't get local.system.replset config from self or any seed (EMPTYCONFIG)", isreplicaset: true, maxBsonObjectSize: 16777216, localTime: new Date(1339652138314), ok: 1.0 }
Thu Jun 14 01:35:38 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31102
Thu Jun 14 01:35:38 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31102 ok
m31102| Thu Jun 14 01:35:38 [initandlisten] connection accepted from 10.255.119.66:53441 #4 (4 connections now open)
Thu Jun 14 01:35:38 [ReplicaSetMonitorWatcher] warning: node: domU-12-31-39-01-70-B4:31102 isn't a part of set: test-rs0 ismaster: { ismaster: false, secondary: false, info: "can't get local.system.replset config from self or any seed (EMPTYCONFIG)", isreplicaset: true, maxBsonObjectSize: 16777216, localTime: new Date(1339652138315), ok: 1.0 }
m31100| Thu Jun 14 01:35:38 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.ns, size: 16MB, took 0.248 secs
m31100| Thu Jun 14 01:35:38 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:35:38 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.0, size: 16MB, took 0.313 secs
m31100| Thu Jun 14 01:35:38 [conn2] ******
m31100| Thu Jun 14 01:35:38 [conn2] replSet info saving a newer config version to local.system.replset
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
m31100| Thu Jun 14 01:35:38 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:35:38 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:35:38 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "test-rs0", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31102" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:594326 w:34 reslen:112 592ms
Thu Jun 14 01:35:39 [ReplicaSetMonitorWatcher] warning: No primary detected for set test-rs0
Thu Jun 14 01:35:39 [ReplicaSetMonitorWatcher] All nodes for set test-rs0 are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
m31100| Thu Jun 14 01:35:47 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:47 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:35:47 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31100| Thu Jun 14 01:35:47 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:35:47 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:35:47 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:47 [initandlisten] connection accepted from 10.255.119.66:43664 #3 (3 connections now open)
m31101| Thu Jun 14 01:35:47 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:35:47 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:35:47 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:35:47 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:35:47 [FileAllocator] creating directory /data/db/test-rs0-1/_tmp
m31102| Thu Jun 14 01:35:47 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:35:47 [initandlisten] connection accepted from 10.255.119.66:43665 #4 (4 connections now open)
m31102| Thu Jun 14 01:35:47 [rsStart] replSet I am domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:35:47 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Thu Jun 14 01:35:47 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:35:47 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.ns, filling with zeroes...
m31102| Thu Jun 14 01:35:47 [FileAllocator] creating directory /data/db/test-rs0-2/_tmp
m31101| Thu Jun 14 01:35:47 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.ns, size: 16MB, took 0.216 secs
m31101| Thu Jun 14 01:35:47 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.0, filling with zeroes...
m31102| Thu Jun 14 01:35:48 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.ns, size: 16MB, took 0.473 secs
m31102| Thu Jun 14 01:35:48 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.0, filling with zeroes...
m31101| Thu Jun 14 01:35:48 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.0, size: 16MB, took 0.557 secs
m31102| Thu Jun 14 01:35:48 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.0, size: 16MB, took 0.361 secs
m31101| Thu Jun 14 01:35:48 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:35:48 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:35:48 [rsSync] ******
m31101| Thu Jun 14 01:35:48 [rsSync] creating replication oplog of size: 10MB...
m31102| Thu Jun 14 01:35:48 [rsStart] replSet saveConfigLocally done
m31102| Thu Jun 14 01:35:48 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:35:48 [rsSync] ******
m31101| Thu Jun 14 01:35:48 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:35:48 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Thu Jun 14 01:35:48 [rsSync] ******
m31102| Thu Jun 14 01:35:48 [rsSync] creating replication oplog of size: 10MB...
m31102| Thu Jun 14 01:35:48 [rsSync] ******
m31102| Thu Jun 14 01:35:48 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:35:48 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
Thu Jun 14 01:35:49 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:35:49 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31100 ok
m31100| Thu Jun 14 01:35:49 [initandlisten] connection accepted from 10.255.119.66:60580 #5 (5 connections now open)
Thu Jun 14 01:35:49 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Thu Jun 14 01:35:49 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Thu Jun 14 01:35:49 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m31100| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31100| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:35:49 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31100| Thu Jun 14 01:35:49 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31101| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31102| Thu Jun 14 01:35:49 [initandlisten] connection accepted from 10.255.119.66:45877 #5 (5 connections now open)
m31101| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31101| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31102| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31102| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31101| Thu Jun 14 01:35:49 [initandlisten] connection accepted from 10.255.119.66:44950 #5 (5 connections now open)
m31102| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31102| Thu Jun 14 01:35:49 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
Thu Jun 14 01:35:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101" ], me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652150321), ok: 1.0 }
m31100| Thu Jun 14 01:35:50 [initandlisten] connection accepted from 10.255.119.66:60583 #6 (6 connections now open)
Thu Jun 14 01:35:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "test-rs0", ismaster: false, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339652150322), ok: 1.0 }
m31101| Thu Jun 14 01:35:50 [initandlisten] connection accepted from 10.255.119.66:44952 #6 (6 connections now open)
Thu Jun 14 01:35:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31102 { setName: "test-rs0", ismaster: false, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], me: "domU-12-31-39-01-70-B4:31102", maxBsonObjectSize: 16777216, localTime: new Date(1339652150323), ok: 1.0 }
m31102| Thu Jun 14 01:35:50 [initandlisten] connection accepted from 10.255.119.66:45881 #6 (6 connections now open)
Thu Jun 14 01:35:51 [ReplicaSetMonitorWatcher] warning: No primary detected for set test-rs0
m31100| Thu Jun 14 01:35:55 [rsMgr] replSet info electSelf 0
m31102| Thu Jun 14 01:35:55 [conn3] replSet RECOVERING
m31102| Thu Jun 14 01:35:55 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31101| Thu Jun 14 01:35:55 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:35:55 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:35:55 [rsMgr] replSet PRIMARY
m31101| Thu Jun 14 01:35:55 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31101| Thu Jun 14 01:35:55 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31102| Thu Jun 14 01:35:55 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31102| Thu Jun 14 01:35:55 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31100| Thu Jun 14 01:35:56 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.ns, filling with zeroes...
m31100| Thu Jun 14 01:35:56 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.ns, size: 16MB, took 0.259 secs
m31100| Thu Jun 14 01:35:57 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.0, filling with zeroes...
m31100| Thu Jun 14 01:35:57 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.0, size: 16MB, took 0.281 secs
m31100| Thu Jun 14 01:35:57 [conn2] build index admin.foo { _id: 1 }
m31100| Thu Jun 14 01:35:57 [conn2] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:35:57 [conn2] insert admin.foo keyUpdates:0 locks(micros) W:594326 w:550139 549ms
ReplSetTest Timestamp(1339652157000, 1)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31100| Thu Jun 14 01:35:57 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31100| Thu Jun 14 01:35:57 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
Thu Jun 14 01:36:01 [ReplicaSetMonitorWatcher] Primary for replica set test-rs0 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:36:01 [conn3] end connection 10.255.119.66:38035 (5 connections now open)
m31101| Thu Jun 14 01:36:01 [initandlisten] connection accepted from 10.255.119.66:44954 #7 (6 connections now open)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31100| Thu Jun 14 01:36:03 [conn3] end connection 10.255.119.66:43664 (5 connections now open)
m31100| Thu Jun 14 01:36:03 [initandlisten] connection accepted from 10.255.119.66:60587 #7 (6 connections now open)
m31100| Thu Jun 14 01:36:03 [conn4] end connection 10.255.119.66:43665 (5 connections now open)
m31100| Thu Jun 14 01:36:03 [initandlisten] connection accepted from 10.255.119.66:60588 #8 (6 connections now open)
m31101| Thu Jun 14 01:36:04 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:36:04 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:36:04 [initandlisten] connection accepted from 10.255.119.66:60589 #9 (7 connections now open)
m31101| Thu Jun 14 01:36:04 [rsSync] build index local.me { _id: 1 }
m31101| Thu Jun 14 01:36:04 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:36:04 [rsSync] replSet initial sync drop all databases
m31101| Thu Jun 14 01:36:04 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Thu Jun 14 01:36:04 [rsSync] replSet initial sync clone all databases
m31101| Thu Jun 14 01:36:04 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:36:04 [initandlisten] connection accepted from 10.255.119.66:60590 #10 (8 connections now open)
m31102| Thu Jun 14 01:36:04 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:36:04 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:36:04 [initandlisten] connection accepted from 10.255.119.66:60591 #11 (9 connections now open)
m31102| Thu Jun 14 01:36:04 [rsSync] build index local.me { _id: 1 }
m31102| Thu Jun 14 01:36:04 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:36:04 [rsSync] replSet initial sync drop all databases
m31102| Thu Jun 14 01:36:04 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Thu Jun 14 01:36:04 [rsSync] replSet initial sync clone all databases
m31102| Thu Jun 14 01:36:04 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:36:04 [initandlisten] connection accepted from 10.255.119.66:60592 #12 (10 connections now open)
m31101| Thu Jun 14 01:36:04 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.ns, filling with zeroes...
m31102| Thu Jun 14 01:36:04 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.ns, filling with zeroes...
m31101| Thu Jun 14 01:36:05 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.ns, size: 16MB, took 0.543 secs
m31102| Thu Jun 14 01:36:05 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.ns, size: 16MB, took 0.531 secs
m31101| Thu Jun 14 01:36:05 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.0, filling with zeroes...
m31102| Thu Jun 14 01:36:05 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.0, filling with zeroes...
m31102| Thu Jun 14 01:36:05 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.0, size: 16MB, took 0.515 secs
m31102| Thu Jun 14 01:36:05 [rsSync] build index admin.foo { _id: 1 }
m31102| Thu Jun 14 01:36:05 [rsSync] fastBuildIndex dupsToDrop:0
m31102| Thu Jun 14 01:36:05 [rsSync] build index done. scanned 1 total records. 0 secs
m31102| Thu Jun 14 01:36:05 [rsSync] replSet initial sync data copy, starting syncup
m31102| Thu Jun 14 01:36:05 [rsSync] replSet initial sync building indexes
m31102| Thu Jun 14 01:36:05 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Thu Jun 14 01:36:05 [conn12] end connection 10.255.119.66:60592 (9 connections now open)
m31102| Thu Jun 14 01:36:05 [rsSync] replSet initial sync query minValid
m31102| Thu Jun 14 01:36:05 [rsSync] replSet initial sync finishing up
m31100| Thu Jun 14 01:36:05 [initandlisten] connection accepted from 10.255.119.66:60593 #13 (10 connections now open)
m31100| Thu Jun 14 01:36:05 [conn13] end connection 10.255.119.66:60593 (9 connections now open)
m31102| Thu Jun 14 01:36:06 [rsSync] replSet set minValid=4fd9783d:1
m31102| Thu Jun 14 01:36:06 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Thu Jun 14 01:36:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:36:06 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.0, size: 16MB, took 0.631 secs
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
m31101| Thu Jun 14 01:36:06 [rsSync] build index admin.foo { _id: 1 }
{
    "ts" : Timestamp(1339652157000, 1),
    "h" : NumberLong("5580237071076818961"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd9783ceec35d80ffdfc944"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31102 is 1339652157000:1 and latest is 1339652157000:1
m31100| Thu Jun 14 01:36:06 [conn10] end connection 10.255.119.66:60590 (8 connections now open)
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31102 is 1
m31101| Thu Jun 14 01:36:06 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Thu Jun 14 01:36:06 [rsSync] build index done. scanned 1 total records. 0 secs
m31101| Thu Jun 14 01:36:06 [rsSync] replSet initial sync data copy, starting syncup
m31101| Thu Jun 14 01:36:06 [rsSync] replSet initial sync building indexes
m31101| Thu Jun 14 01:36:06 [rsSync] replSet initial sync cloning indexes for : admin
m31102| Thu Jun 14 01:36:06 [rsSync] replSet initial sync done
m31101| Thu Jun 14 01:36:06 [rsSync] replSet initial sync query minValid
m31101| Thu Jun 14 01:36:06 [rsSync] replSet initial sync finishing up
m31100| Thu Jun 14 01:36:06 [conn11] end connection 10.255.119.66:60591 (7 connections now open)
m31100| Thu Jun 14 01:36:06 [initandlisten] connection accepted from 10.255.119.66:60594 #14 (8 connections now open)
m31100| Thu Jun 14 01:36:06 [conn14] end connection 10.255.119.66:60594 (7 connections now open)
m31101| Thu Jun 14 01:36:06 [rsSync] replSet set minValid=4fd9783d:1
m31101| Thu Jun 14 01:36:06 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Thu Jun 14 01:36:06 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:36:06 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:36:06 [conn9] end connection 10.255.119.66:60589 (6 connections now open)
m31101| Thu Jun 14 01:36:06 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:36:06 [initandlisten] connection accepted from 10.255.119.66:60595 #15 (7 connections now open)
m31102| Thu Jun 14 01:36:06 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:36:06 [initandlisten] connection accepted from 10.255.119.66:60596 #16 (8 connections now open)
m31102| Thu Jun 14 01:36:06 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:36:06 [initandlisten] connection accepted from 10.255.119.66:60597 #17 (9 connections now open)
m31102| Thu Jun 14 01:36:07 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:36:07 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:36:07 [initandlisten] connection accepted from 10.255.119.66:60598 #18 (10 connections now open)
m31101| Thu Jun 14 01:36:07 [rsSync] replSet SECONDARY
m31100| Thu Jun 14 01:36:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31100| Thu Jun 14 01:36:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m31101| Thu Jun 14 01:36:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m31100| Thu Jun 14 01:36:07 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Thu Jun 14 01:36:07 [slaveTracking] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:36:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
{
    "ts" : Timestamp(1339652157000, 1),
    "h" : NumberLong("5580237071076818961"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd9783ceec35d80ffdfc944"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339652157000:1 and latest is 1339652157000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 1
{
    "ts" : Timestamp(1339652157000, 1),
    "h" : NumberLong("5580237071076818961"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd9783ceec35d80ffdfc944"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31102 is 1339652157000:1 and latest is 1339652157000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31102 is 1
ReplSetTest await synced=true
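The awaited document above is the single insert into admin.foo replicated from the primary. A minimal sketch of reading the same entry back from a secondary's oplog in the mongo shell (setSlaveOk permits the read on a SECONDARY in the legacy shell):

    // Sketch: inspect the newest oplog entry on a secondary.
    var sec = new Mongo("domU-12-31-39-01-70-B4:31102");
    sec.setSlaveOk();                                    // allow reads from a SECONDARY
    var local = sec.getDB("local");
    local.oplog.rs.find().sort({ $natural: -1 }).limit(1).forEach(printjson);
    // Prints the { "op" : "i", "ns" : "admin.foo", ... } document shown above.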
m31100| Thu Jun 14 01:36:08 [initandlisten] connection accepted from 10.255.119.66:60599 #19 (11 connections now open)
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:36:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m29000| Thu Jun 14 01:36:08
m29000| Thu Jun 14 01:36:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:36:08
m29000| Thu Jun 14 01:36:08 [initandlisten] MongoDB starting : pid=25440 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:36:08 [initandlisten]
m29000| Thu Jun 14 01:36:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:36:08 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:36:08 [initandlisten]
m29000| Thu Jun 14 01:36:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:36:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:36:08 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:36:08 [initandlisten]
m29000| Thu Jun 14 01:36:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:36:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:36:08 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:36:08 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:36:08 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:36:08 [websvr] admin web console waiting for connections on port 30000
Resetting db path '/data/db/test-config1'
m29000| Thu Jun 14 01:36:08 [initandlisten] connection accepted from 127.0.0.1:54651 #1 (1 connection now open)
Thu Jun 14 01:36:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29001 --dbpath /data/db/test-config1
m29001| Thu Jun 14 01:36:08
m29001| Thu Jun 14 01:36:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29001| Thu Jun 14 01:36:08
m29001| Thu Jun 14 01:36:08 [initandlisten] MongoDB starting : pid=25453 port=29001 dbpath=/data/db/test-config1 32-bit host=domU-12-31-39-01-70-B4
m29001| Thu Jun 14 01:36:08 [initandlisten]
m29001| Thu Jun 14 01:36:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29001| Thu Jun 14 01:36:08 [initandlisten] ** Not recommended for production.
m29001| Thu Jun 14 01:36:08 [initandlisten]
m29001| Thu Jun 14 01:36:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29001| Thu Jun 14 01:36:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29001| Thu Jun 14 01:36:08 [initandlisten] ** with --journal, the limit is lower
m29001| Thu Jun 14 01:36:08 [initandlisten]
m29001| Thu Jun 14 01:36:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29001| Thu Jun 14 01:36:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29001| Thu Jun 14 01:36:08 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29001| Thu Jun 14 01:36:08 [initandlisten] options: { dbpath: "/data/db/test-config1", port: 29001 }
m29001| Thu Jun 14 01:36:08 [initandlisten] waiting for connections on port 29001
m29001| Thu Jun 14 01:36:08 [websvr] admin web console waiting for connections on port 30001
Resetting db path '/data/db/test-config2'
m29001| Thu Jun 14 01:36:08 [initandlisten] connection accepted from 127.0.0.1:32799 #1 (1 connection now open)
Thu Jun 14 01:36:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29002 --dbpath /data/db/test-config2
m29002| Thu Jun 14 01:36:08
m29002| Thu Jun 14 01:36:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29002| Thu Jun 14 01:36:08
m29002| Thu Jun 14 01:36:08 [initandlisten] MongoDB starting : pid=25466 port=29002 dbpath=/data/db/test-config2 32-bit host=domU-12-31-39-01-70-B4
m29002| Thu Jun 14 01:36:08 [initandlisten]
m29002| Thu Jun 14 01:36:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29002| Thu Jun 14 01:36:08 [initandlisten] ** Not recommended for production.
m29002| Thu Jun 14 01:36:08 [initandlisten]
m29002| Thu Jun 14 01:36:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29002| Thu Jun 14 01:36:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29002| Thu Jun 14 01:36:08 [initandlisten] ** with --journal, the limit is lower
m29002| Thu Jun 14 01:36:08 [initandlisten]
m29002| Thu Jun 14 01:36:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29002| Thu Jun 14 01:36:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29002| Thu Jun 14 01:36:08 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29002| Thu Jun 14 01:36:08 [initandlisten] options: { dbpath: "/data/db/test-config2", port: 29002 }
m29002| Thu Jun 14 01:36:08 [initandlisten] waiting for connections on port 29002
m29002| Thu Jun 14 01:36:08 [websvr] admin web console waiting for connections on port 30002
"domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002"
m29002| Thu Jun 14 01:36:08 [initandlisten] connection accepted from 127.0.0.1:54466 #1 (1 connection now open)
Thu Jun 14 01:36:08 SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m29000| Thu Jun 14 01:36:08 [initandlisten] connection accepted from 10.255.119.66:52751 #2 (2 connections now open)
Thu Jun 14 01:36:08 SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
Thu Jun 14 01:36:08 SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29002| Thu Jun 14 01:36:08 [initandlisten] connection accepted from 10.255.119.66:35176 #2 (2 connections now open)
m29000| Thu Jun 14 01:36:08 [conn2] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:08 [initandlisten] connection accepted from 10.255.119.66:57347 #2 (2 connections now open)
m29001| Thu Jun 14 01:36:08 [conn2] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:08 [conn2] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:08 [FileAllocator] allocating new datafile /data/db/test-config1/config.ns, filling with zeroes...
m29001| Thu Jun 14 01:36:08 [FileAllocator] creating directory /data/db/test-config1/_tmp
m29000| Thu Jun 14 01:36:08 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:36:08 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29002| Thu Jun 14 01:36:08 [FileAllocator] allocating new datafile /data/db/test-config2/config.ns, filling with zeroes...
m29002| Thu Jun 14 01:36:08 [FileAllocator] creating directory /data/db/test-config2/_tmp
m29000| Thu Jun 14 01:36:09 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.879 secs
m29000| Thu Jun 14 01:36:09 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29002| Thu Jun 14 01:36:09 [FileAllocator] done allocating datafile /data/db/test-config2/config.ns, size: 16MB, took 0.817 secs
m29001| Thu Jun 14 01:36:09 [FileAllocator] done allocating datafile /data/db/test-config1/config.ns, size: 16MB, took 0.89 secs
m29001| Thu Jun 14 01:36:09 [FileAllocator] allocating new datafile /data/db/test-config1/config.0, filling with zeroes...
m29002| Thu Jun 14 01:36:09 [FileAllocator] allocating new datafile /data/db/test-config2/config.0, filling with zeroes...
m29000| Thu Jun 14 01:36:10 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 1.024 secs
m29000| Thu Jun 14 01:36:10 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:36:10 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:10 [conn2] insert config.settings keyUpdates:0 locks(micros) W:4 w:1934856 1934ms
m29000| Thu Jun 14 01:36:10 [conn2] fsync from getlasterror
m29000| Thu Jun 14 01:36:10 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29001| Thu Jun 14 01:36:10 [FileAllocator] done allocating datafile /data/db/test-config1/config.0, size: 16MB, took 1.196 secs
m29002| Thu Jun 14 01:36:10 [FileAllocator] done allocating datafile /data/db/test-config2/config.0, size: 16MB, took 0.948 secs
m29002| Thu Jun 14 01:36:10 [FileAllocator] allocating new datafile /data/db/test-config2/config.1, filling with zeroes...
m29002| Thu Jun 14 01:36:10 [conn2] build index config.settings { _id: 1 }
m29002| Thu Jun 14 01:36:10 [conn2] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:10 [conn2] insert config.settings keyUpdates:0 locks(micros) W:2 w:2105630 2105ms
m29001| Thu Jun 14 01:36:10 [FileAllocator] allocating new datafile /data/db/test-config1/config.1, filling with zeroes...
m29001| Thu Jun 14 01:36:10 [conn2] build index config.settings { _id: 1 }
m29001| Thu Jun 14 01:36:10 [conn2] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:10 [conn2] insert config.settings keyUpdates:0 locks(micros) W:5 w:2131977 2131ms
m29001| Thu Jun 14 01:36:12 [FileAllocator] done allocating datafile /data/db/test-config1/config.1, size: 32MB, took 1.812 secs
m29000| Thu Jun 14 01:36:12 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 1.977 secs
m29002| Thu Jun 14 01:36:12 [FileAllocator] done allocating datafile /data/db/test-config2/config.1, size: 32MB, took 1.985 secs
m29000| Thu Jun 14 01:36:12 [conn2] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:4 w:1934856 reslen:83 1998ms
m29001| Thu Jun 14 01:36:12 [conn2] fsync from getlasterror
m29002| Thu Jun 14 01:36:12 [conn2] fsync from getlasterror
ShardingTest test :
{
    "config" : "domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002",
    "shards" : [
        connection to test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
    ]
}
Thu Jun 14 01:36:12 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002
m30999| Thu Jun 14 01:36:12 [mongosMain] MongoS version 2.1.2-pre- starting: pid=25485 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:36:12 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:36:12 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:36:12 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002", port: 30999 }
m29000| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:52755 #3 (3 connections now open)
m29001| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:57351 #3 (3 connections now open)
m29002| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:35180 #3 (3 connections now open)
m30999| Thu Jun 14 01:36:12 [mongosMain] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m29000| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:52758 #4 (4 connections now open)
m30999| Thu Jun 14 01:36:12 [mongosMain] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m30999| Thu Jun 14 01:36:12 [mongosMain] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29001| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:57354 #4 (4 connections now open)
m29002| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:35183 #4 (4 connections now open)
m30999| Thu Jun 14 01:36:12 [mongosMain] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m29000| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:52761 #5 (5 connections now open)
m30999| Thu Jun 14 01:36:12 [mongosMain] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m29001| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:57357 #5 (5 connections now open)
m30999| Thu Jun 14 01:36:12 [mongosMain] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29002| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:35186 #5 (5 connections now open)
m29002| Thu Jun 14 01:36:12 [conn5] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:12 [conn5] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:12 [conn5] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:12 [conn5] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:36:12 [conn5] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn5] build index config.version { _id: 1 }
m29002| Thu Jun 14 01:36:12 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:12 [conn5] fsync from getlasterror
m29001| Thu Jun 14 01:36:12 [conn5] build index config.version { _id: 1 }
m29001| Thu Jun 14 01:36:12 [conn5] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn5] fsync from getlasterror
m29002| Thu Jun 14 01:36:12 [conn5] fsync from getlasterror
m29000| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:12 [conn4] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:12 [conn4] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn4] build index config.chunks { _id: 1 }
m29002| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn4] info: creating collection config.chunks on add index
m29002| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, min: 1 }
m29002| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:12 [conn4] build index config.chunks { _id: 1 }
m29001| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn4] info: creating collection config.chunks on add index
m29001| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, min: 1 }
m29001| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29000| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29001| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29002| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29000| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m29001| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m29002| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29000| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:12 [conn4] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:12 [conn4] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:36:12 [conn4] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn4] build index config.shards { _id: 1 }
m29001| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn4] info: creating collection config.shards on add index
m29001| Thu Jun 14 01:36:12 [conn4] build index config.shards { host: 1 }
m29001| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn4] build index config.shards { _id: 1 }
m29002| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn4] info: creating collection config.shards on add index
m29002| Thu Jun 14 01:36:12 [conn4] build index config.shards { host: 1 }
m29002| Thu Jun 14 01:36:12 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:12 [conn4] fsync from getlasterror
m29000| Thu Jun 14 01:36:12 [conn5] build index config.mongos { _id: 1 }
m30999| Thu Jun 14 01:36:12 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:36:12 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:36:12 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:36:12 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:36:12
m30999| Thu Jun 14 01:36:12 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:36:12 [conn5] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:12 [conn5] build index config.mongos { _id: 1 }
m29001| Thu Jun 14 01:36:12 [conn5] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:12 [conn5] build index config.mongos { _id: 1 }
m29002| Thu Jun 14 01:36:12 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:36:12 [Balancer] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:36:12 [mongosMain] waiting for connections on port 30999
m29000| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:52765 #6 (6 connections now open)
m30999| Thu Jun 14 01:36:12 [Balancer] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m29001| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:57361 #6 (6 connections now open)
m30999| Thu Jun 14 01:36:12 [Balancer] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29002| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:35190 #6 (6 connections now open)
m29000| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:52768 #7 (7 connections now open)
m29001| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:57364 #7 (7 connections now open)
m29002| Thu Jun 14 01:36:12 [initandlisten] connection accepted from 10.255.119.66:35193 #7 (7 connections now open)
m29000| Thu Jun 14 01:36:12 [conn6] CMD fsync: sync:1 lock:0
m30999| Thu Jun 14 01:36:12 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002 and process domU-12-31-39-01-70-B4:30999:1339652172:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:12 [conn6] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:12 [conn6] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:12 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn6] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:36:13 [conn6] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:13 [conn6] build index config.locks { _id: 1 }
m29001| Thu Jun 14 01:36:13 [conn6] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:13 [conn6] build index config.locks { _id: 1 }
m29002| Thu Jun 14 01:36:13 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29000| Thu Jun 14 01:36:13 [conn4] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:36:13 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:13 [conn4] build index config.lockpings { _id: 1 }
m29001| Thu Jun 14 01:36:13 [conn4] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:13 [conn4] build index config.lockpings { _id: 1 }
m29002| Thu Jun 14 01:36:13 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m30999| Thu Jun 14 01:36:13 [mongosMain] connection accepted from 127.0.0.1:53323 #1 (1 connection now open)
ShardingTest undefined going to add shard : test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:36:13 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m29000| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:52772 #8 (8 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m30999| Thu Jun 14 01:36:13 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29001| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:57368 #8 (8 connections now open)
m29002| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:35197 #8 (8 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:36:13 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m29000| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:52775 #9 (9 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m30999| Thu Jun 14 01:36:13 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29001| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:57371 #9 (9 connections now open)
m29002| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:35200 #9 (9 connections now open)
m29000| Thu Jun 14 01:36:13 [conn8] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn8] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn8] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29000| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29000| Thu Jun 14 01:36:13 [conn8] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:36:13 [conn8] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:13 [conn8] build index config.databases { _id: 1 }
m29001| Thu Jun 14 01:36:13 [conn8] build index done. scanned 0 total records. 0 secs
m29002| Thu Jun 14 01:36:13 [conn8] build index config.databases { _id: 1 }
m29002| Thu Jun 14 01:36:13 [conn8] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:13 [conn8] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn8] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn8] fsync from getlasterror
m29000| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn4] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:36:13 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:36:13 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002
m30999| Thu Jun 14 01:36:13 [conn] starting new replica set monitor for replica set test-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m29000| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29000| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] build index config.lockpings { ping: 1 }
m29001| Thu Jun 14 01:36:13 [conn4] build index done. scanned 1 total records. 0 secs
m29002| Thu Jun 14 01:36:13 [conn4] build index config.lockpings { ping: 1 }
m29002| Thu Jun 14 01:36:13 [conn4] build index done. scanned 1 total records. 0 secs
m31100| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:60633 #20 (12 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set test-rs0
m30999| Thu Jun 14 01:36:13 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from test-rs0/
m30999| Thu Jun 14 01:36:13 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set test-rs0
m31100| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:60634 #21 (13 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set test-rs0
m30999| Thu Jun 14 01:36:13 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set test-rs0
m30999| Thu Jun 14 01:36:13 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set test-rs0
m30999| Thu Jun 14 01:36:13 [conn] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set test-rs0
m31101| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:45003 #8 (7 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set test-rs0
m31102| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:45932 #7 (7 connections now open)
m31100| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:60637 #22 (14 connections now open)
m31100| Thu Jun 14 01:36:13 [conn20] end connection 10.255.119.66:60633 (13 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] Primary for replica set test-rs0 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:45006 #9 (8 connections now open)
m31102| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:45935 #8 (8 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] replica set monitor for replica set test-rs0 started, address is test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:36:13 [ReplicaSetMonitorWatcher] starting
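The replica set monitor above was seeded with the three test-rs0 hosts, learned the member list and primary from them, and the ReplicaSetMonitorWatcher thread keeps refreshing that view. A client gets the same behaviour by connecting with a set-name/seed-list string; a hedged sketch, assuming this shell build accepts that form:

// Hedged sketch of a set-aware connection built from the same seed list:
var rsConn = new Mongo("test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102");
printjson(rsConn.getDB("admin").runCommand({ isMaster: 1 }));  // reports setName, hosts[] and the primary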
m31100| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:60640 #23 (14 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] going to add shard: { _id: "test-rs0", host: "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" }
m29000| Thu Jun 14 01:36:13 [conn9] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn9] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn9] CMD fsync: sync:1 lock:0
m30999| Thu Jun 14 01:36:13 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652172:1804289383' acquired, ts : 4fd9784dc0e10775308cf609
m29000| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn6] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29000| Thu Jun 14 01:36:13 [conn9] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn6] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn9] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn9] fsync from getlasterror
m30999| Thu Jun 14 01:36:13 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652172:1804289383' unlocked.
{ "shardAdded" : "test-rs0", "ok" : 1 }
m30999| Thu Jun 14 01:36:13 [conn] couldn't find database [test] in config db
m29000| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m30999| Thu Jun 14 01:36:13 [conn] put [test] on: test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:36:13 [conn] enabling sharding on: test
m29000| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m30999| Thu Jun 14 01:36:13 [conn] CMD: shardcollection: { shardCollection: "test.user", key: { x: 1.0 } }
m30999| Thu Jun 14 01:36:13 [conn] enable sharding on: test.user with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:36:13 [conn] going to create 1 chunk(s) for: test.user using new epoch 4fd9784dc0e10775308cf60a
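The three [conn] lines above correspond to enabling sharding on the test database and sharding test.user on { x: 1 }, after which mongos creates one initial chunk covering the whole key range. A hedged sketch of roughly equivalent admin commands:

// Hedged sketch of the two admin commands driving the lines above:
var admin = new Mongo("domU-12-31-39-01-70-B4:30999").getDB("admin");
printjson(admin.runCommand({ enableSharding: "test" }));
printjson(admin.runCommand({ shardCollection: "test.user", key: { x: 1 } }));
// mongos then creates the single initial chunk { x: MinKey } -->> { x: MaxKey } logged above.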
m31100| Thu Jun 14 01:36:13 [FileAllocator] allocating new datafile /data/db/test-rs0-0/test.ns, filling with zeroes...
m29002| Thu Jun 14 01:36:13 [conn9] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn9] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn9] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn9] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn9] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn9] fsync from getlasterror
m29000| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m30999| Thu Jun 14 01:36:13 [conn] ChunkManager: time to load chunks for test.user: 2ms sequenceNumber: 2 version: 1|0||4fd9784dc0e10775308cf60a based on: (empty)
m29001| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn4] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:32 w:4125 reslen:101 251ms
m29002| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:13 [conn4] build index config.collections { _id: 1 }
m29002| Thu Jun 14 01:36:13 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:13 [conn4] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:36:13 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29002| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m29001| Thu Jun 14 01:36:13 [conn4] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:13 [conn4] build index config.collections { _id: 1 }
m29001| Thu Jun 14 01:36:13 [conn4] build index done. scanned 0 total records. 0 secs
m29001| Thu Jun 14 01:36:13 [conn4] fsync from getlasterror
m31100| Thu Jun 14 01:36:13 [FileAllocator] done allocating datafile /data/db/test-rs0-0/test.ns, size: 16MB, took 0.405 secs
m31100| Thu Jun 14 01:36:13 [initandlisten] connection accepted from 10.255.119.66:60641 #24 (15 connections now open)
m30999| Thu Jun 14 01:36:13 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31100 serverID: 4fd9784cc0e10775308cf608
m30999| Thu Jun 14 01:36:13 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31101 serverID: 4fd9784cc0e10775308cf608
m30999| Thu Jun 14 01:36:13 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31102 serverID: 4fd9784cc0e10775308cf608
m31100| Thu Jun 14 01:36:13 [FileAllocator] allocating new datafile /data/db/test-rs0-0/test.0, filling with zeroes...
m31100| Thu Jun 14 01:36:14 [FileAllocator] done allocating datafile /data/db/test-rs0-0/test.0, size: 16MB, took 0.355 secs
m31100| Thu Jun 14 01:36:14 [conn23] build index test.user { _id: 1 }
m31100| Thu Jun 14 01:36:14 [conn23] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:36:14 [conn23] info: creating collection test.user on add index
m31100| Thu Jun 14 01:36:14 [conn23] build index test.user { x: 1.0 }
m31100| Thu Jun 14 01:36:14 [conn23] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:36:14 [conn23] insert test.system.indexes keyUpdates:0 locks(micros) R:5 W:69 r:204 w:770982 770ms
m31100| Thu Jun 14 01:36:14 [conn24] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002", serverID: ObjectId('4fd9784cc0e10775308cf608'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:775 reslen:51 348ms
m31100| Thu Jun 14 01:36:14 [conn24] no current chunk manager found for this shard, will initialize
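setShardVersion is the handshake mongos uses to tell the shard which config servers and version to load chunk metadata from; "no current chunk manager found" simply means this is the first versioned operation the shard has seen for test.user, so it initializes that metadata now. The resulting version can be inspected afterwards; a hedged sketch:

// Hedged sketch: inspecting the version mongos and the shard now agree on.
var admin = new Mongo("domU-12-31-39-01-70-B4:30999").getDB("admin");
printjson(admin.runCommand({ getShardVersion: "test.user" }));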
m31100| Thu Jun 14 01:36:14 [conn15] getmore local.oplog.rs query: { ts: { $gte: new Date(5753762202330857473) } } cursorid:5666766282163082419 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:892 nreturned:1 reslen:147 2234ms
m31100| Thu Jun 14 01:36:14 [conn24] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m31100| Thu Jun 14 01:36:14 [conn24] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m29000| Thu Jun 14 01:36:14 [initandlisten] connection accepted from 10.255.119.66:52787 #10 (10 connections now open)
m31100| Thu Jun 14 01:36:14 [conn24] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29001| Thu Jun 14 01:36:14 [initandlisten] connection accepted from 10.255.119.66:57383 #10 (10 connections now open)
m29002| Thu Jun 14 01:36:14 [initandlisten] connection accepted from 10.255.119.66:35212 #10 (10 connections now open)
m30999| Thu Jun 14 01:36:14 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:36:14 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m29000| Thu Jun 14 01:36:14 [initandlisten] connection accepted from 10.255.119.66:52790 #11 (11 connections now open)
m29001| Thu Jun 14 01:36:14 [initandlisten] connection accepted from 10.255.119.66:57386 #11 (11 connections now open)
m30999| Thu Jun 14 01:36:14 [conn] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29002| Thu Jun 14 01:36:14 [initandlisten] connection accepted from 10.255.119.66:35215 #11 (11 connections now open)
m31102| Thu Jun 14 01:36:15 [FileAllocator] allocating new datafile /data/db/test-rs0-2/test.ns, filling with zeroes...
m31101| Thu Jun 14 01:36:15 [FileAllocator] allocating new datafile /data/db/test-rs0-1/test.ns, filling with zeroes...
m31102| Thu Jun 14 01:36:15 [conn3] end connection 10.255.119.66:53439 (7 connections now open)
m31102| Thu Jun 14 01:36:15 [initandlisten] connection accepted from 10.255.119.66:45944 #9 (8 connections now open)
m31101| Thu Jun 14 01:36:15 [FileAllocator] done allocating datafile /data/db/test-rs0-1/test.ns, size: 16MB, took 0.596 secs
m31101| Thu Jun 14 01:36:15 [FileAllocator] allocating new datafile /data/db/test-rs0-1/test.0, filling with zeroes...
m31102| Thu Jun 14 01:36:15 [FileAllocator] done allocating datafile /data/db/test-rs0-2/test.ns, size: 16MB, took 0.62 secs
m31102| Thu Jun 14 01:36:15 [FileAllocator] allocating new datafile /data/db/test-rs0-2/test.0, filling with zeroes...
m31101| Thu Jun 14 01:36:16 [FileAllocator] done allocating datafile /data/db/test-rs0-1/test.0, size: 16MB, took 0.687 secs
m31101| Thu Jun 14 01:36:16 [rsSync] build index test.user { _id: 1 }
m31101| Thu Jun 14 01:36:16 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:36:16 [rsSync] info: creating collection test.user on add index
m31101| Thu Jun 14 01:36:16 [rsSync] build index test.user { x: 1.0 }
m31101| Thu Jun 14 01:36:16 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:36:16 [FileAllocator] done allocating datafile /data/db/test-rs0-2/test.0, size: 16MB, took 0.683 secs
m31102| Thu Jun 14 01:36:16 [rsSync] build index test.user { _id: 1 }
m31102| Thu Jun 14 01:36:16 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:36:16 [rsSync] info: creating collection test.user on add index
m31102| Thu Jun 14 01:36:16 [rsSync] build index test.user { x: 1.0 }
m31102| Thu Jun 14 01:36:16 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:36:17 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:137 reslen:94 3228ms
m31102| Thu Jun 14 01:36:17 [conn5] end connection 10.255.119.66:45877 (7 connections now open)
m31102| Thu Jun 14 01:36:17 [initandlisten] connection accepted from 10.255.119.66:45945 #10 (8 connections now open)
m31101| Thu Jun 14 01:36:18 [conn5] end connection 10.255.119.66:44950 (7 connections now open)
m31101| Thu Jun 14 01:36:18 [initandlisten] connection accepted from 10.255.119.66:45018 #10 (8 connections now open)
m31100| Thu Jun 14 01:36:19 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:255 reslen:94 2002ms
m31100| Thu Jun 14 01:36:21 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:335 reslen:94 2002ms
m31100| Thu Jun 14 01:36:23 [initandlisten] connection accepted from 10.255.119.66:60651 #25 (16 connections now open)
m31101| Thu Jun 14 01:36:23 [initandlisten] connection accepted from 10.255.119.66:45020 #11 (9 connections now open)
m31102| Thu Jun 14 01:36:23 [initandlisten] connection accepted from 10.255.119.66:45949 #11 (9 connections now open)
m29000| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m29000| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m29002| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m29001| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m29000| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m29002| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m29001| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m30999| Thu Jun 14 01:36:23 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652172:1804289383' acquired, ts : 4fd97857c0e10775308cf60b
m29000| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m29002| Thu Jun 14 01:36:23 [conn5] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:23 [conn5] fsync from getlasterror
m30999| Thu Jun 14 01:36:23 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652172:1804289383' unlocked.
m31100| Thu Jun 14 01:36:23 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:448 reslen:94 2002ms
m31100| Thu Jun 14 01:36:25 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:533 reslen:94 2002ms
m31100| Thu Jun 14 01:36:27 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:609 reslen:94 2002ms
m31100| Thu Jun 14 01:36:29 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:715 reslen:94 2002ms
m31100| Thu Jun 14 01:36:31 [conn24] command admin.$cmd command: { getLastError: 1.0, w: "majority" } ntoreturn:1 keyUpdates:0 locks(micros) W:788 w:801 reslen:94 2002ms
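These repeated getLastError lines are the test inserting into test.user and then waiting for { w: "majority" } acknowledgement; each wait here took roughly two seconds before a majority of test-rs0 had the write. A hedged sketch of that write-then-wait pattern:

// Hedged sketch of the write-then-wait pattern producing the lines above:
var testDB = new Mongo("domU-12-31-39-01-70-B4:30999").getDB("test");
testDB.user.insert({ x: 1 });
// block until a majority of test-rs0 acknowledges the write
// (the log shows no wtimeout, so the wait is open-ended):
printjson(testDB.runCommand({ getLastError: 1, w: "majority" }));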
m31100| Thu Jun 14 01:36:31 [conn23] request split points lookup for chunk test.user { : MinKey } -->> { : MaxKey }
m31100| Thu Jun 14 01:36:31 [conn23] max number of requested split points reached (2) before the end of chunk test.user { : MinKey } -->> { : MaxKey }
m31100| Thu Jun 14 01:36:31 [conn23] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m31100| Thu Jun 14 01:36:31 [conn23] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m29000| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:52799 #12 (12 connections now open)
m31100| Thu Jun 14 01:36:31 [conn23] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29001| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:57395 #12 (12 connections now open)
m29002| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:35224 #12 (12 connections now open)
m31100| Thu Jun 14 01:36:31 [conn23] received splitChunk request: { splitChunk: "test.user", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "test-rs0", splitKeys: [ { x: 0.0 } ], shardId: "test.user-x_MinKey", configdb: "domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002" }
m31100| Thu Jun 14 01:36:31 [conn23] created new distributed lock for test.user on domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:52802 #13 (13 connections now open)
m29001| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:57398 #13 (13 connections now open)
m29002| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:35227 #13 (13 connections now open)
m31100| Thu Jun 14 01:36:31 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000,domU-12-31-39-01-70-B4:29001,domU-12-31-39-01-70-B4:29002 and process domU-12-31-39-01-70-B4:31100:1339652191:824339432 (sleeping for 30000ms)
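The splitChunk path takes a collection-level distributed lock held in the config servers, and the LockPinger thread above renews that lock's lease every 30 seconds. Both the locks and the pings are ordinary documents and can be read back through mongos; a hedged sketch:

// Hedged sketch: lock and ping state live as documents in the config database.
var configDB = new Mongo("domU-12-31-39-01-70-B4:30999").getDB("config");
configDB.locks.find().forEach(printjson);       // e.g. the 'balancer' and 'test.user' locks
configDB.lockpings.find().forEach(printjson);   // one ping document per lock-holding process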
m29000| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29000| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29001| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29001| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29001| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29001| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29001| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29001| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29000| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29000| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29000| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn10] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:31 [conn10] fsync from getlasterror
m31100| Thu Jun 14 01:36:31 [conn23] distributed lock 'test.user/domU-12-31-39-01-70-B4:31100:1339652191:824339432' acquired, ts : 4fd9785fa0c5868a8d41dd54
m31100| Thu Jun 14 01:36:31 [conn23] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29000]
m31100| Thu Jun 14 01:36:31 [conn23] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29001]
m29000| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:52805 #14 (14 connections now open)
m31100| Thu Jun 14 01:36:31 [conn23] SyncClusterConnection connecting to [domU-12-31-39-01-70-B4:29002]
m29001| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:57401 #14 (14 connections now open)
m29002| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:35230 #14 (14 connections now open)
m29000| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m31100| Thu Jun 14 01:36:31 [conn23] splitChunk accepted at version 1|0||4fd9784dc0e10775308cf60a
m29002| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m31100| Thu Jun 14 01:36:31 [conn23] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:36:31-0", server: "domU-12-31-39-01-70-B4", clientAddr: "10.255.119.66:60640", time: new Date(1339652191521), what: "split", ns: "test.user", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9784dc0e10775308cf60a') }, right: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9784dc0e10775308cf60a') } } }
m29000| Thu Jun 14 01:36:31 [conn14] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn14] CMD fsync: sync:1 lock:0
m29002| Thu Jun 14 01:36:31 [conn14] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn14] fsync from getlasterror
m29001| Thu Jun 14 01:36:31 [conn14] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn14] fsync from getlasterror
m29000| Thu Jun 14 01:36:31 [conn14] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn14] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn14] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn14] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn14] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn14] fsync from getlasterror
m29000| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29000| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn12] CMD fsync: sync:1 lock:0
m29001| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m29002| Thu Jun 14 01:36:31 [conn12] fsync from getlasterror
m31100| Thu Jun 14 01:36:31 [conn23] distributed lock 'test.user/domU-12-31-39-01-70-B4:31100:1339652191:824339432' unlocked.
m30999| Thu Jun 14 01:36:31 [conn] ChunkManager: time to load chunks for test.user: 0ms sequenceNumber: 3 version: 1|2||4fd9784dc0e10775308cf60a based on: 1|0||4fd9784dc0e10775308cf60a
m30999| Thu Jun 14 01:36:31 [conn] autosplitted test.user shard: ns:test.user at: test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } on: { x: 0.0 } (splitThreshold 921)
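This is the autosplit: conn23 saw the single test.user chunk exceed its split threshold (921 here), asked the shard for split points, and split at { x: 0.0 }, leaving two chunks (versions 1|1 and 1|2) under epoch 4fd9784dc0e10775308cf60a. The same split can be requested by hand; a hedged sketch:

// Hedged sketch of requesting the same split manually:
var admin = new Mongo("domU-12-31-39-01-70-B4:30999").getDB("admin");
printjson(admin.runCommand({ split: "test.user", middle: { x: 0 } }));
// config.chunks then holds the two test.user chunks the log describes:
admin.getSiblingDB("config").chunks.find({ ns: "test.user" }).forEach(printjson);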
m30999| Thu Jun 14 01:36:31 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:36:31 [conn3] end connection 10.255.119.66:52755 (13 connections now open)
m29000| Thu Jun 14 01:36:31 [conn4] end connection 10.255.119.66:52758 (12 connections now open)
m29001| Thu Jun 14 01:36:31 [conn3] end connection 10.255.119.66:57351 (13 connections now open)
m29001| Thu Jun 14 01:36:31 [conn4] end connection 10.255.119.66:57354 (13 connections now open)
m29002| Thu Jun 14 01:36:31 [conn3] end connection 10.255.119.66:35180 (13 connections now open)
m29002| Thu Jun 14 01:36:31 [conn4] end connection 10.255.119.66:35183 (13 connections now open)
m29002| Thu Jun 14 01:36:31 [conn5] end connection 10.255.119.66:35186 (11 connections now open)
m29000| Thu Jun 14 01:36:31 [conn6] end connection 10.255.119.66:52765 (11 connections now open)
m29001| Thu Jun 14 01:36:31 [conn6] end connection 10.255.119.66:57361 (11 connections now open)
m29001| Thu Jun 14 01:36:31 [conn5] end connection 10.255.119.66:57357 (11 connections now open)
m29002| Thu Jun 14 01:36:31 [conn6] end connection 10.255.119.66:35190 (10 connections now open)
m29002| Thu Jun 14 01:36:31 [conn7] end connection 10.255.119.66:35193 (9 connections now open)
m29002| Thu Jun 14 01:36:31 [conn8] end connection 10.255.119.66:35197 (9 connections now open)
m29000| Thu Jun 14 01:36:31 [conn7] end connection 10.255.119.66:52768 (10 connections now open)
m29000| Thu Jun 14 01:36:31 [conn8] end connection 10.255.119.66:52772 (9 connections now open)
m29001| Thu Jun 14 01:36:31 [conn7] end connection 10.255.119.66:57364 (10 connections now open)
m29001| Thu Jun 14 01:36:31 [conn8] end connection 10.255.119.66:57368 (10 connections now open)
m31100| Thu Jun 14 01:36:31 [conn21] end connection 10.255.119.66:60634 (15 connections now open)
m31100| Thu Jun 14 01:36:31 [conn23] end connection 10.255.119.66:60640 (14 connections now open)
m31100| Thu Jun 14 01:36:31 [conn24] end connection 10.255.119.66:60641 (13 connections now open)
m31101| Thu Jun 14 01:36:31 [conn8] end connection 10.255.119.66:45003 (8 connections now open)
m29000| Thu Jun 14 01:36:31 [conn11] end connection 10.255.119.66:52790 (8 connections now open)
m29001| Thu Jun 14 01:36:31 [conn11] end connection 10.255.119.66:57386 (7 connections now open)
m31102| Thu Jun 14 01:36:31 [conn7] end connection 10.255.119.66:45932 (8 connections now open)
m29002| Thu Jun 14 01:36:31 [conn11] end connection 10.255.119.66:35215 (7 connections now open)
m29000| Thu Jun 14 01:36:31 [conn5] end connection 10.255.119.66:52761 (7 connections now open)
m29002| Thu Jun 14 01:36:31 [conn9] end connection 10.255.119.66:35200 (6 connections now open)
m29000| Thu Jun 14 01:36:31 [conn9] end connection 10.255.119.66:52775 (6 connections now open)
m29001| Thu Jun 14 01:36:31 [conn9] end connection 10.255.119.66:57371 (6 connections now open)
m31101| Thu Jun 14 01:36:31 [conn11] end connection 10.255.119.66:45020 (7 connections now open)
m31100| Thu Jun 14 01:36:31 [conn25] end connection 10.255.119.66:60651 (12 connections now open)
m31102| Thu Jun 14 01:36:31 [conn11] end connection 10.255.119.66:45949 (7 connections now open)
m31101| Thu Jun 14 01:36:31 [conn7] end connection 10.255.119.66:44954 (6 connections now open)
m31101| Thu Jun 14 01:36:31 [initandlisten] connection accepted from 10.255.119.66:45031 #12 (7 connections now open)
Thu Jun 14 01:36:32 shell: stopped mongo program on port 30999
Thu Jun 14 01:36:32 No db started on port: 30000
Thu Jun 14 01:36:32 shell: stopped mongo program on port 30000
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Thu Jun 14 01:36:32 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:36:32 [interruptThread] now exiting
m31100| Thu Jun 14 01:36:32 dbexit:
m31100| Thu Jun 14 01:36:32 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:36:32 [interruptThread] closing listening socket: 34
m31100| Thu Jun 14 01:36:32 [interruptThread] closing listening socket: 35
m31100| Thu Jun 14 01:36:32 [interruptThread] closing listening socket: 44
m31100| Thu Jun 14 01:36:32 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:36:32 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:36:32 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:36:32 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:36:32 [conn9] end connection 10.255.119.66:45944 (6 connections now open)
m31101| Thu Jun 14 01:36:32 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:36:32 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m29000| Thu Jun 14 01:36:32 [conn13] end connection 10.255.119.66:52802 (5 connections now open)
m31102| Thu Jun 14 01:36:32 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m29002| Thu Jun 14 01:36:32 [conn12] end connection 10.255.119.66:35224 (5 connections now open)
m29000| Thu Jun 14 01:36:32 [conn12] end connection 10.255.119.66:52799 (4 connections now open)
m29002| Thu Jun 14 01:36:32 [conn13] end connection 10.255.119.66:35227 (4 connections now open)
m29001| Thu Jun 14 01:36:32 [conn12] end connection 10.255.119.66:57395 (5 connections now open)
m29001| Thu Jun 14 01:36:32 [conn13] end connection 10.255.119.66:57398 (4 connections now open)
m29001| Thu Jun 14 01:36:32 [conn14] end connection 10.255.119.66:57401 (3 connections now open)
m31101| Thu Jun 14 01:36:32 [conn12] end connection 10.255.119.66:45031 (6 connections now open)
m29000| Thu Jun 14 01:36:32 [conn14] end connection 10.255.119.66:52805 (3 connections now open)
m29002| Thu Jun 14 01:36:32 [conn14] end connection 10.255.119.66:35230 (3 connections now open)
m29002| Thu Jun 14 01:36:32 [conn10] end connection 10.255.119.66:35212 (2 connections now open)
m29001| Thu Jun 14 01:36:32 [conn10] end connection 10.255.119.66:57383 (2 connections now open)
m31100| Thu Jun 14 01:36:32 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:36:32 [conn1] end connection 10.255.119.66:43652 (11 connections now open)
m29000| Thu Jun 14 01:36:32 [conn10] end connection 10.255.119.66:52787 (2 connections now open)
m31100| Thu Jun 14 01:36:32 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:36:32 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:36:32 dbexit: really exiting now
m31102| Thu Jun 14 01:36:33 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:36:33 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Thu Jun 14 01:36:33 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:36:33 [interruptThread] now exiting
m31101| Thu Jun 14 01:36:33 dbexit:
m31101| Thu Jun 14 01:36:33 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:36:33 [interruptThread] closing listening socket: 45
m31101| Thu Jun 14 01:36:33 [interruptThread] closing listening socket: 48
m31101| Thu Jun 14 01:36:33 [interruptThread] closing listening socket: 49
m31101| Thu Jun 14 01:36:33 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:36:33 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:36:33 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:36:33 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:36:33 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:36:33 [interruptThread] closeAllFiles() finished
m31101| Thu Jun 14 01:36:33 [conn1] end connection 10.255.119.66:38030 (5 connections now open)
m31101| Thu Jun 14 01:36:33 [interruptThread] shutdown: removing fs lock...
m31102| Thu Jun 14 01:36:33 [conn10] end connection 10.255.119.66:45945 (5 connections now open)
m31101| Thu Jun 14 01:36:33 dbexit: really exiting now
m31102| Thu Jun 14 01:36:34 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:36:34 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31101 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31101 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:36:34 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state DOWN
m31102| Thu Jun 14 01:36:34 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:36:34 [rsHealthPoll] couldn't connect to domU-12-31-39-01-70-B4:31100: couldn't connect to server domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:36:34 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): socket exception
m31102| Thu Jun 14 01:36:34 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31102| Thu Jun 14 01:36:34 [rsMgr] replSet can't see a majority, will not try to elect self
m31102| Thu Jun 14 01:36:34 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:36:34 [rsSyncNotifier] Socket flush send() errno:32 Broken pipe 10.255.119.66:31100
m31102| Thu Jun 14 01:36:34 [rsSyncNotifier] caught exception (socket exception) in destructor (~PiggyBackData)
m31102| Thu Jun 14 01:36:34 [rsSyncNotifier] repl: couldn't connect to server domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:36:34 shell: stopped mongo program on port 31101
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
ReplSetTest stop *** Shutting down mongod in port 31102 ***
m31102| Thu Jun 14 01:36:34 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Thu Jun 14 01:36:34 [interruptThread] now exiting
m31102| Thu Jun 14 01:36:34 dbexit:
m31102| Thu Jun 14 01:36:34 [interruptThread] shutdown: going to close listening sockets...
m31102| Thu Jun 14 01:36:34 [interruptThread] closing listening socket: 49
m31102| Thu Jun 14 01:36:34 [interruptThread] closing listening socket: 51
m31102| Thu Jun 14 01:36:34 [interruptThread] closing listening socket: 53
m31102| Thu Jun 14 01:36:34 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Thu Jun 14 01:36:34 [interruptThread] shutdown: going to flush diaglog...
m31102| Thu Jun 14 01:36:34 [interruptThread] shutdown: going to close sockets...
m31102| Thu Jun 14 01:36:34 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:36:34 [interruptThread] shutdown: closing all files...
m31102| Thu Jun 14 01:36:34 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:36:34 [interruptThread] shutdown: removing fs lock...
m31102| Thu Jun 14 01:36:34 [conn1] end connection 10.255.119.66:53436 (4 connections now open)
m31102| Thu Jun 14 01:36:34 dbexit: really exiting now
Thu Jun 14 01:36:35 shell: stopped mongo program on port 31102
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
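ReplSetTest is the shell harness behind the m3110x processes: it started the three-node test-rs0 set earlier in the test, and stopSet() sends each node SIGTERM and removes the dbpaths, producing the shutdown lines above. A rough sketch of that lifecycle as jstests use it; the option names shown are illustrative, not copied from this test:

// Rough sketch of the ReplSetTest lifecycle reported above
// (startPort in particular is an assumption):
var rt = new ReplSetTest({ name: "test-rs0", nodes: 3, startPort: 31100 });
rt.startSet();
rt.initiate();
// ... test body runs against the set and the mongos ...
rt.stopSet();   // "Shutting down mongod in port ..." lines, then dbpaths are deleted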
m29000| Thu Jun 14 01:36:35 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:36:35 [interruptThread] now exiting
m29000| Thu Jun 14 01:36:35 dbexit:
m29000| Thu Jun 14 01:36:35 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:36:35 [interruptThread] closing listening socket: 55
m29000| Thu Jun 14 01:36:35 [interruptThread] closing listening socket: 56
m29000| Thu Jun 14 01:36:35 [interruptThread] closing listening socket: 57
m29000| Thu Jun 14 01:36:35 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:36:35 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:36:35 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:36:35 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:36:35 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:36:35 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:36:35 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:36:35 dbexit: really exiting now
Thu Jun 14 01:36:36 shell: stopped mongo program on port 29000
m29001| Thu Jun 14 01:36:36 got signal 15 (Terminated), will terminate after current cmd ends
m29001| Thu Jun 14 01:36:36 [interruptThread] now exiting
m29001| Thu Jun 14 01:36:36 dbexit:
m29001| Thu Jun 14 01:36:36 [interruptThread] shutdown: going to close listening sockets...
m29001| Thu Jun 14 01:36:36 [interruptThread] closing listening socket: 58
m29001| Thu Jun 14 01:36:36 [interruptThread] closing listening socket: 59
m29001| Thu Jun 14 01:36:36 [interruptThread] closing listening socket: 60
m29001| Thu Jun 14 01:36:36 [interruptThread] removing socket file: /tmp/mongodb-29001.sock
m29001| Thu Jun 14 01:36:36 [interruptThread] shutdown: going to flush diaglog...
m29001| Thu Jun 14 01:36:36 [interruptThread] shutdown: going to close sockets...
m29001| Thu Jun 14 01:36:36 [interruptThread] shutdown: waiting for fs preallocator...
m29001| Thu Jun 14 01:36:36 [interruptThread] shutdown: closing all files...
m29001| Thu Jun 14 01:36:36 [interruptThread] closeAllFiles() finished
m29001| Thu Jun 14 01:36:36 [interruptThread] shutdown: removing fs lock...
m29001| Thu Jun 14 01:36:36 dbexit: really exiting now
Thu Jun 14 01:36:37 shell: stopped mongo program on port 29001
m29002| Thu Jun 14 01:36:37 got signal 15 (Terminated), will terminate after current cmd ends
m29002| Thu Jun 14 01:36:37 [interruptThread] now exiting
m29002| Thu Jun 14 01:36:37 dbexit:
m29002| Thu Jun 14 01:36:37 [interruptThread] shutdown: going to close listening sockets...
m29002| Thu Jun 14 01:36:37 [interruptThread] closing listening socket: 61
m29002| Thu Jun 14 01:36:37 [interruptThread] closing listening socket: 62
m29002| Thu Jun 14 01:36:37 [interruptThread] closing listening socket: 63
m29002| Thu Jun 14 01:36:37 [interruptThread] removing socket file: /tmp/mongodb-29002.sock
m29002| Thu Jun 14 01:36:37 [interruptThread] shutdown: going to flush diaglog...
m29002| Thu Jun 14 01:36:37 [interruptThread] shutdown: going to close sockets...
m29002| Thu Jun 14 01:36:37 [interruptThread] shutdown: waiting for fs preallocator...
m29002| Thu Jun 14 01:36:37 [interruptThread] shutdown: closing all files...
m29002| Thu Jun 14 01:36:37 [interruptThread] closeAllFiles() finished
m29002| Thu Jun 14 01:36:37 [interruptThread] shutdown: removing fs lock...
m29002| Thu Jun 14 01:36:37 dbexit: really exiting now
Thu Jun 14 01:36:38 shell: stopped mongo program on port 29002
*** ShardingTest test completed successfully in 61.207 seconds ***
106184.427023ms
Thu Jun 14 01:36:38 [initandlisten] connection accepted from 127.0.0.1:34952 #33 (20 connections now open)
*******************************************
Test : gridfs.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/gridfs.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/gridfs.js";TestData.testFile = "gridfs.js";TestData.testName = "gridfs";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:36:38 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:36:38 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:36:38
m30000| Thu Jun 14 01:36:38 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:36:38
m30000| Thu Jun 14 01:36:38 [initandlisten] MongoDB starting : pid=25618 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:36:38 [initandlisten]
m30000| Thu Jun 14 01:36:38 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:36:38 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:36:38 [initandlisten]
m30000| Thu Jun 14 01:36:38 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:36:38 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:36:38 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:36:38 [initandlisten]
m30000| Thu Jun 14 01:36:38 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:36:38 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:36:38 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:36:38 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:36:38 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:36:38 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:36:38 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:36:38 [initandlisten] connection accepted from 127.0.0.1:56486 #1 (1 connection now open)
m30001| Thu Jun 14 01:36:39
m30001| Thu Jun 14 01:36:39 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:36:39
m30001| Thu Jun 14 01:36:39 [initandlisten] MongoDB starting : pid=25631 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:36:39 [initandlisten]
m30001| Thu Jun 14 01:36:39 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:36:39 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:36:39 [initandlisten]
m30001| Thu Jun 14 01:36:39 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:36:39 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:36:39 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:36:39 [initandlisten]
m30001| Thu Jun 14 01:36:39 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:36:39 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:36:39 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:36:39 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:36:39 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:36:39 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/test2'
Thu Jun 14 01:36:39 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/test2
m30001| Thu Jun 14 01:36:39 [initandlisten] connection accepted from 127.0.0.1:44391 #1 (1 connection now open)
m30002| Thu Jun 14 01:36:39
m30002| Thu Jun 14 01:36:39 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:36:39
m30002| Thu Jun 14 01:36:39 [initandlisten] MongoDB starting : pid=25644 port=30002 dbpath=/data/db/test2 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:36:39 [initandlisten]
m30002| Thu Jun 14 01:36:39 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:36:39 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:36:39 [initandlisten]
m30002| Thu Jun 14 01:36:39 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:36:39 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:36:39 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:36:39 [initandlisten]
m30002| Thu Jun 14 01:36:39 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:36:39 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:36:39 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:36:39 [initandlisten] options: { dbpath: "/data/db/test2", port: 30002 }
m30002| Thu Jun 14 01:36:39 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:36:39 [websvr] admin web console waiting for connections on port 31002
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:36:39 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m30002| Thu Jun 14 01:36:39 [initandlisten] connection accepted from 127.0.0.1:45486 #1 (1 connection now open)
m29000| Thu Jun 14 01:36:39
m29000| Thu Jun 14 01:36:39 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:36:39
m29000| Thu Jun 14 01:36:39 [initandlisten] MongoDB starting : pid=25656 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:36:39 [initandlisten]
m29000| Thu Jun 14 01:36:39 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:36:39 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:36:39 [initandlisten]
m29000| Thu Jun 14 01:36:39 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:36:39 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:36:39 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:36:39 [initandlisten]
m29000| Thu Jun 14 01:36:39 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:36:39 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:36:39 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:36:39 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:36:39 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:36:39 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:36:39 [websvr] ERROR: addr already in use
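The bind failure on 0.0.0.0:30000 is the config server's admin web console: the console listens on the db port plus 1000 (visible throughout this log, e.g. 27999/28999 and 30000/31000), so the config server on 29000 tries 30000, which the first shard mongod already owns. The config server itself still starts and listens on 29000, as the following lines show. A quick check of that layout:

// The admin web console binds to (port + 1000); with the ports used here:
function webConsolePort(dbPort) { return dbPort + 1000; }
assert.eq(30000, webConsolePort(29000));   // config server's console collides with the shard on 30000
assert.eq(31000, webConsolePort(30000));   // the shard's own console, which did bind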
"localhost:29000"
m29000| Thu Jun 14 01:36:39 [initandlisten] connection accepted from 127.0.0.1:54726 #1 (1 connection now open)
m29000| Thu Jun 14 01:36:39 [initandlisten] connection accepted from 127.0.0.1:54727 #2 (2 connections now open)
ShardingTest test :
{
"config" : "localhost:29000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001,
connection to localhost:30002
]
}
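gridfs.js drives its own cluster through the ShardingTest shell helper, which is what printed the config/shards summary above. A hedged sketch of the construction; the option names are approximated rather than copied from gridfs.js:

// Hedged sketch of the ShardingTest construction behind the printout above:
var st = new ShardingTest({ shards: 3, mongos: 1 });   // option shape is an assumption
var mongos = st.s;          // the mongos printed as port 30999 above
var configDB = st.config;   // the config database at localhost:29000
// ... test body ...
st.stop();                  // tears the whole cluster down at the end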
m29000| Thu Jun 14 01:36:39 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:36:39 [FileAllocator] creating directory /data/db/test-config0/_tmp
Thu Jun 14 01:36:39 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:29000
m30999| Thu Jun 14 01:36:39 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:36:39 [mongosMain] MongoS version 2.1.2-pre- starting: pid=25672 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:36:39 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:36:39 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:36:39 [mongosMain] options: { configdb: "localhost:29000", port: 30999 }
m29000| Thu Jun 14 01:36:39 [initandlisten] connection accepted from 127.0.0.1:54729 #3 (3 connections now open)
m29000| Thu Jun 14 01:36:39 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.287 secs
m29000| Thu Jun 14 01:36:39 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:36:40 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.295 secs
m29000| Thu Jun 14 01:36:40 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:36:40 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [conn2] insert config.settings keyUpdates:0 locks(micros) w:600632 600ms
m29000| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:54732 #4 (4 connections now open)
m29000| Thu Jun 14 01:36:40 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:54733 #5 (5 connections now open)
m30999| Thu Jun 14 01:36:40 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:36:40 [websvr] admin web console waiting for connections on port 31999
m29000| Thu Jun 14 01:36:40 [conn5] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [conn5] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:36:40 [conn5] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:36:40 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:36:40 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:36:40 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [conn5] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [conn5] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:36:40 [conn5] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:36:40 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:36:40 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:36:40 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:36:40 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:36:40
m30999| Thu Jun 14 01:36:40 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:36:40 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:54734 #6 (6 connections now open)
m29000| Thu Jun 14 01:36:40 [conn6] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:36:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' acquired, ts : 4fd978686dfcc1afddb6495d
m30999| Thu Jun 14 01:36:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' unlocked.
m30999| Thu Jun 14 01:36:40 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30999:1339652200:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:36:40 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:36:40 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:36:40 [conn3] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:36:40 [mongosMain] connection accepted from 127.0.0.1:53382 #1 (1 connection now open)
m30999| Thu Jun 14 01:36:40 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:36:40 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:36:40 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:36:40 [conn] put [admin] on: config:localhost:29000
m30999| Thu Jun 14 01:36:40 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:44406 #2 (2 connections now open)
m30999| Thu Jun 14 01:36:40 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:45500 #2 (2 connections now open)
m30999| Thu Jun 14 01:36:40 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
**** unsharded ****
m30999| Thu Jun 14 01:36:40 [conn] couldn't find database [unsharded] in config db
m30999| Thu Jun 14 01:36:40 [conn] put [unsharded] on: shard0000:localhost:30000
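"put [unsharded] on: shard0000" is mongos assigning the new database a primary shard; until a collection in it is explicitly sharded, all of its data lives on that one shard. The assignment is recorded in config.databases; a hedged sketch of reading it back:

// Hedged sketch: the primary-shard assignment as stored in config.databases.
var configDB = new Mongo("localhost:30999").getDB("config");
configDB.databases.find().forEach(printjson);
// e.g. { "_id" : "unsharded", "partitioned" : false, "primary" : "shard0000" }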
m30999| Thu Jun 14 01:36:40 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd978686dfcc1afddb6495c
m30001| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:44409 #3 (3 connections now open)
m30999| Thu Jun 14 01:36:40 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd978686dfcc1afddb6495c
m30002| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:45503 #3 (3 connections now open)
m30999| Thu Jun 14 01:36:40 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd978686dfcc1afddb6495c
m30000| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:56502 #2 (2 connections now open)
m30000| Thu Jun 14 01:36:40 [initandlisten] connection accepted from 127.0.0.1:56505 #3 (3 connections now open)
m29000| Thu Jun 14 01:36:41 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.818 secs
Thu Jun 14 01:36:43 shell: started program /mnt/slaves/Linux_32bit/mongo/mongofiles --port 30999 put mongod --db unsharded
sh25708| connected to: 127.0.0.1:30999
m30999| Thu Jun 14 01:36:44 [mongosMain] connection accepted from 127.0.0.1:53389 #2 (2 connections now open)
m30000| Thu Jun 14 01:36:44 [initandlisten] connection accepted from 127.0.0.1:56509 #4 (4 connections now open)
m30001| Thu Jun 14 01:36:44 [initandlisten] connection accepted from 127.0.0.1:44413 #4 (4 connections now open)
m30002| Thu Jun 14 01:36:44 [initandlisten] connection accepted from 127.0.0.1:45507 #4 (4 connections now open)
m30000| Thu Jun 14 01:36:44 [FileAllocator] allocating new datafile /data/db/test0/unsharded.ns, filling with zeroes...
m30000| Thu Jun 14 01:36:44 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Thu Jun 14 01:36:44 [FileAllocator] done allocating datafile /data/db/test0/unsharded.ns, size: 16MB, took 0.307 secs
m30000| Thu Jun 14 01:36:44 [FileAllocator] allocating new datafile /data/db/test0/unsharded.0, filling with zeroes...
m30000| Thu Jun 14 01:36:44 [FileAllocator] done allocating datafile /data/db/test0/unsharded.0, size: 16MB, took 0.418 secs
m30000| Thu Jun 14 01:36:44 [conn4] build index unsharded.fs.files { _id: 1 }
m30000| Thu Jun 14 01:36:44 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:36:44 [conn4] info: creating collection unsharded.fs.files on add index
m30000| Thu Jun 14 01:36:44 [conn4] build index unsharded.fs.files { filename: 1 }
m30000| Thu Jun 14 01:36:44 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:36:44 [conn4] insert unsharded.system.indexes keyUpdates:0 locks(micros) w:738469 738ms
m30000| Thu Jun 14 01:36:44 [conn4] build index unsharded.fs.chunks { _id: 1 }
m30000| Thu Jun 14 01:36:44 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:36:44 [conn4] info: creating collection unsharded.fs.chunks on add index
m30000| Thu Jun 14 01:36:44 [conn4] build index unsharded.fs.chunks { files_id: 1, n: 1 }
m30000| Thu Jun 14 01:36:44 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:36:44 [FileAllocator] allocating new datafile /data/db/test0/unsharded.1, filling with zeroes...
m30000| Thu Jun 14 01:36:45 [FileAllocator] done allocating datafile /data/db/test0/unsharded.1, size: 32MB, took 0.908 secs
m30000| Thu Jun 14 01:36:45 [FileAllocator] allocating new datafile /data/db/test0/unsharded.2, filling with zeroes...
m30000| Thu Jun 14 01:36:45 [conn4] insert unsharded.fs.chunks keyUpdates:0 locks(micros) w:1604813 853ms
m30000| Thu Jun 14 01:36:47 [FileAllocator] done allocating datafile /data/db/test0/unsharded.2, size: 64MB, took 1.573 secs
m30000| Thu Jun 14 01:36:47 [conn4] insert unsharded.fs.chunks keyUpdates:0 locks(micros) w:2855103 1210ms
m30000| Thu Jun 14 01:36:47 [FileAllocator] allocating new datafile /data/db/test0/unsharded.3, filling with zeroes...
m30000| Thu Jun 14 01:36:50 [initandlisten] connection accepted from 127.0.0.1:56512 #5 (5 connections now open)
m30001| Thu Jun 14 01:36:50 [initandlisten] connection accepted from 127.0.0.1:44416 #5 (5 connections now open)
m30002| Thu Jun 14 01:36:50 [initandlisten] connection accepted from 127.0.0.1:45510 #5 (5 connections now open)
m30999| Thu Jun 14 01:36:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' acquired, ts : 4fd978726dfcc1afddb6495e
m30999| Thu Jun 14 01:36:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' unlocked.
m30000| Thu Jun 14 01:36:50 [FileAllocator] done allocating datafile /data/db/test0/unsharded.3, size: 128MB, took 3.479 secs
m30000| Thu Jun 14 01:36:50 [FileAllocator] allocating new datafile /data/db/test0/unsharded.4, filling with zeroes...
m30000| Thu Jun 14 01:36:50 [conn4] insert unsharded.fs.chunks keyUpdates:0 locks(micros) w:5824259 2858ms
m30000| Thu Jun 14 01:36:51 [conn4] command unsharded.$cmd command: { filemd5: ObjectId('4fd9786cdf1ac32485b50928'), root: "fs" } ntoreturn:1 keyUpdates:0 numYields: 402 locks(micros) r:267963 w:5851737 reslen:94 783ms
sh25708| added file: { _id: ObjectId('4fd9786cdf1ac32485b50928'), filename: "mongod", chunkSize: 262144, uploadDate: new Date(1339652211844), md5: "cd2eb30417f1f1fb1c666ccb462da035", length: 105292849 }
m30999| Thu Jun 14 01:36:51 [conn] end connection 127.0.0.1:53389 (1 connection now open)
sh25708| done!
fileObj: {
"_id" : ObjectId("4fd9786cdf1ac32485b50928"),
"filename" : "mongod",
"chunkSize" : 262144,
"uploadDate" : ISODate("2012-06-14T05:36:51.844Z"),
"md5" : "cd2eb30417f1f1fb1c666ccb462da035",
"length" : 105292849
}
m30000| Thu Jun 14 01:36:52 [conn3] command unsharded.$cmd command: { filemd5: ObjectId('4fd9786cdf1ac32485b50928') } ntoreturn:1 keyUpdates:0 numYields: 402 locks(micros) W:148 r:54811 reslen:94 410ms
filemd5 output: { "numChunks" : 402, "md5" : "cd2eb30417f1f1fb1c666ccb462da035", "ok" : 1 }
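The check being logged here is the point of the test: mongofiles records an md5 in fs.files when it uploads, and the server-side filemd5 command recomputes the digest from the data actually stored in fs.chunks, so the two can be compared; the chunk count also has to agree with the file size. A sketch of that verification, assuming a shell connected to the same mongos and using the values printed above:

    // roughly the verification being logged above (illustrative, not the test's exact code)
    var udb = db.getSiblingDB("unsharded");
    var f   = udb.fs.files.findOne({ filename: "mongod" });
    var res = udb.runCommand({ filemd5: f._id, root: "fs" });    // recomputes md5 over fs.chunks
    assert.eq(f.md5, res.md5);                                   // "cd2eb30417f1f1fb1c666ccb462da035"
    assert.eq(Math.ceil(f.length / f.chunkSize), res.numChunks); // ceil(105292849 / 262144) == 402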
**** sharded db, unsharded collection ****
m30999| Thu Jun 14 01:36:53 [conn] couldn't find database [sharded_db] in config db
m30999| Thu Jun 14 01:36:53 [conn] put [sharded_db] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:36:53 [conn] enabling sharding on: sharded_db
Thu Jun 14 01:36:53 shell: started program /mnt/slaves/Linux_32bit/mongo/mongofiles --port 30999 put mongod --db sharded_db
m30999| Thu Jun 14 01:36:53 [mongosMain] connection accepted from 127.0.0.1:53396 #3 (2 connections now open)
sh25725| connected to: 127.0.0.1:30999
m30001| Thu Jun 14 01:36:53 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.ns, filling with zeroes...
m30001| Thu Jun 14 01:36:53 [FileAllocator] creating directory /data/db/test1/_tmp
m30000| Thu Jun 14 01:36:56 [FileAllocator] done allocating datafile /data/db/test0/unsharded.4, size: 256MB, took 5.377 secs
m30001| Thu Jun 14 01:36:56 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.ns, size: 16MB, took 0.354 secs
m30001| Thu Jun 14 01:36:56 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.0, filling with zeroes...
m30001| Thu Jun 14 01:36:56 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.0, size: 16MB, took 0.277 secs
m30001| Thu Jun 14 01:36:56 [conn4] build index sharded_db.fs.files { _id: 1 }
m30001| Thu Jun 14 01:36:56 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:36:56 [conn4] info: creating collection sharded_db.fs.files on add index
m30001| Thu Jun 14 01:36:56 [conn4] build index sharded_db.fs.files { filename: 1 }
m30001| Thu Jun 14 01:36:56 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:36:56 [conn4] insert sharded_db.system.indexes keyUpdates:0 locks(micros) w:2968506 2968ms
m30001| Thu Jun 14 01:36:56 [conn4] build index sharded_db.fs.chunks { _id: 1 }
m30001| Thu Jun 14 01:36:56 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:36:56 [conn4] info: creating collection sharded_db.fs.chunks on add index
m30001| Thu Jun 14 01:36:56 [conn4] build index sharded_db.fs.chunks { files_id: 1, n: 1 }
m30001| Thu Jun 14 01:36:56 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:36:56 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.1, filling with zeroes...
m30001| Thu Jun 14 01:36:57 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.1, size: 32MB, took 0.577 secs
m30001| Thu Jun 14 01:36:57 [conn4] insert sharded_db.fs.chunks keyUpdates:0 locks(micros) w:3561294 578ms
m30001| Thu Jun 14 01:36:57 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.2, filling with zeroes...
m30001| Thu Jun 14 01:36:59 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.2, size: 64MB, took 1.727 secs
m30001| Thu Jun 14 01:36:59 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.3, filling with zeroes...
m30001| Thu Jun 14 01:36:59 [conn4] insert sharded_db.fs.chunks keyUpdates:0 locks(micros) w:5036820 1373ms
m30999| Thu Jun 14 01:37:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' acquired, ts : 4fd9787c6dfcc1afddb6495f
m30999| Thu Jun 14 01:37:00 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' unlocked.
m30001| Thu Jun 14 01:37:02 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.3, size: 128MB, took 3.235 secs
m30001| Thu Jun 14 01:37:02 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.4, filling with zeroes...
m30001| Thu Jun 14 01:37:02 [conn4] insert sharded_db.fs.chunks keyUpdates:0 locks(micros) w:8069603 2970ms
m30001| Thu Jun 14 01:37:03 [conn4] command sharded_db.$cmd command: { filemd5: ObjectId('4fd97875edac164ef1b0157a'), root: "fs" } ntoreturn:1 keyUpdates:0 numYields: 402 locks(micros) r:60723 w:8096981 reslen:94 385ms
sh25725| added file: { _id: ObjectId('4fd97875edac164ef1b0157a'), filename: "mongod", chunkSize: 262144, uploadDate: new Date(1339652223046), md5: "cd2eb30417f1f1fb1c666ccb462da035", length: 105292849 }
sh25725| done!
m30999| Thu Jun 14 01:37:03 [conn] end connection 127.0.0.1:53396 (1 connection now open)
fileObj: {
"_id" : ObjectId("4fd97875edac164ef1b0157a"),
"filename" : "mongod",
"chunkSize" : 262144,
"uploadDate" : ISODate("2012-06-14T05:37:03.046Z"),
"md5" : "cd2eb30417f1f1fb1c666ccb462da035",
"length" : 105292849
}
m30001| Thu Jun 14 01:37:03 [conn3] command sharded_db.$cmd command: { filemd5: ObjectId('4fd97875edac164ef1b0157a') } ntoreturn:1 keyUpdates:0 numYields: 402 locks(micros) W:181 r:58438 reslen:94 388ms
filemd5 output: { "numChunks" : 402, "md5" : "cd2eb30417f1f1fb1c666ccb462da035", "ok" : 1 }
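In this phase sharding is enabled on the database but fs.chunks itself is never sharded, so the whole file is written to the database's primary shard, shard0001, as the "put [sharded_db] on" line above shows. One way to confirm that from a shell connected to the mongos (config.databases is the standard config metadata collection, not something specific to this test):

    // where an unsharded collection in a sharded database lives
    var config = db.getSiblingDB("config");
    config.databases.findOne({ _id: "sharded_db" });
    // expected here, per the log: { _id: "sharded_db", partitioned: true, primary: "shard0001" }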
**** sharded collection on files_id ****
m30999| Thu Jun 14 01:37:04 [conn] couldn't find database [sharded_files_id] in config db
m30999| Thu Jun 14 01:37:04 [conn] put [sharded_files_id] on: shard0002:localhost:30002
m30999| Thu Jun 14 01:37:04 [conn] enabling sharding on: sharded_files_id
m30999| Thu Jun 14 01:37:04 [conn] CMD: shardcollection: { shardcollection: "sharded_files_id.fs.chunks", key: { files_id: 1.0 } }
m30999| Thu Jun 14 01:37:04 [conn] enable sharding on: sharded_files_id.fs.chunks with shard key: { files_id: 1.0 }
m30999| Thu Jun 14 01:37:04 [conn] going to create 1 chunk(s) for: sharded_files_id.fs.chunks using new epoch 4fd978806dfcc1afddb64960
m30002| Thu Jun 14 01:37:04 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.ns, filling with zeroes...
m30002| Thu Jun 14 01:37:04 [FileAllocator] creating directory /data/db/test2/_tmp
m30999| Thu Jun 14 01:37:04 [conn] ChunkManager: time to load chunks for sharded_files_id.fs.chunks: 0ms sequenceNumber: 2 version: 1|0||4fd978806dfcc1afddb64960 based on: (empty)
m29000| Thu Jun 14 01:37:04 [conn3] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:37:05 [conn3] build index done. scanned 0 total records. 0.363 secs
m29000| Thu Jun 14 01:37:05 [conn3] update config.collections query: { _id: "sharded_files_id.fs.chunks" } update: { _id: "sharded_files_id.fs.chunks", lastmod: new Date(1339652224), dropped: false, key: { files_id: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978806dfcc1afddb64960') } nscanned:0 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1624 w:366191 363ms
m30999| Thu Jun 14 01:37:05 [conn] resetting shard version of sharded_files_id.fs.chunks on localhost:30000, version is zero
m30999| Thu Jun 14 01:37:05 [conn] resetting shard version of sharded_files_id.fs.chunks on localhost:30001, version is zero
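The mongos lines above correspond to sharding fs.chunks on { files_id: 1 } before the upload starts: sharding is enabled on the database, the collection is sharded, and a single MinKey-to-MaxKey chunk is created on shard0002. A minimal sketch of the commands that produce this, issued through the mongos (the test may use shell helpers such as sh.shardCollection instead):

    // sketch of the setup behind the shardcollection log lines
    db.adminCommand({ enableSharding: "sharded_files_id" });
    db.adminCommand({ shardCollection: "sharded_files_id.fs.chunks", key: { files_id: 1 } });
    // result, per the log: one chunk { files_id: MinKey } -->> { files_id: MaxKey } on shard0002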
m30001| Thu Jun 14 01:37:08 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.4, size: 256MB, took 6.007 secs
m30002| Thu Jun 14 01:37:09 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.ns, size: 16MB, took 0.484 secs
m30002| Thu Jun 14 01:37:09 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.0, filling with zeroes...
m30002| Thu Jun 14 01:37:09 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.0, size: 16MB, took 0.379 secs
m30002| Thu Jun 14 01:37:09 [conn5] build index sharded_files_id.fs.chunks { _id: 1 }
m30002| Thu Jun 14 01:37:09 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.1, filling with zeroes...
m30002| Thu Jun 14 01:37:09 [conn5] build index done. scanned 0 total records. 0.027 secs
m30002| Thu Jun 14 01:37:09 [conn5] info: creating collection sharded_files_id.fs.chunks on add index
m30002| Thu Jun 14 01:37:09 [conn5] build index sharded_files_id.fs.chunks { files_id: 1.0 }
m30002| Thu Jun 14 01:37:09 [conn5] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:37:09 [conn5] insert sharded_files_id.system.indexes keyUpdates:0 locks(micros) W:118 r:261 w:4779366 4779ms
m30002| Thu Jun 14 01:37:09 [conn3] command admin.$cmd command: { setShardVersion: "sharded_files_id.fs.chunks", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd978806dfcc1afddb64960'), serverID: ObjectId('4fd978686dfcc1afddb6495c'), shard: "shard0002", shardHost: "localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) W:66 reslen:209 4413ms
m30002| Thu Jun 14 01:37:09 [conn3] no current chunk manager found for this shard, will initialize
m29000| Thu Jun 14 01:37:09 [initandlisten] connection accepted from 127.0.0.1:54750 #7 (7 connections now open)
m30002| Thu Jun 14 01:37:10 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.1, size: 32MB, took 0.685 secs
m30999| Thu Jun 14 01:37:10 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' acquired, ts : 4fd978866dfcc1afddb64961
m30999| Thu Jun 14 01:37:10 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:37:10 [Balancer] shard0000 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:10 [Balancer] shard0001 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:10 [Balancer] shard0002 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:10 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:37:10 [Balancer] shard0000
m30999| Thu Jun 14 01:37:10 [Balancer] shard0001
m30999| Thu Jun 14 01:37:10 [Balancer] shard0002
m30999| Thu Jun 14 01:37:10 [Balancer] { _id: "sharded_files_id.fs.chunks-files_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd978806dfcc1afddb64960'), ns: "sharded_files_id.fs.chunks", min: { files_id: MinKey }, max: { files_id: MaxKey }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:10 [Balancer] ----
m30999| Thu Jun 14 01:37:10 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' unlocked.
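The ShardInfoMap / ShardToChunksMap dump is the balancer's view at this point: the per-shard data sizes it tracks (currSize 256, 256 and 32), and the sharded collection still made of a single MinKey-to-MaxKey chunk on shard0002, so there is nothing to migrate yet. The same state can be read directly from the config database; sh.status() and config.chunks are standard shell tools, not part of this test:

    // inspect the state the Balancer just printed
    var config = db.getSiblingDB("config");
    config.chunks.find({ ns: "sharded_files_id.fs.chunks" }).toArray();
    // per the dump: one chunk, min { files_id: MinKey }, max { files_id: MaxKey }, on "shard0002"
    sh.status();  // shell helper summarizing shards, databases and chunk distribution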
Thu Jun 14 01:37:10 shell: started program /mnt/slaves/Linux_32bit/mongo/mongofiles --port 30999 put mongod --db sharded_files_id
sh25735| connected to: 127.0.0.1:30999
m30999| Thu Jun 14 01:37:10 [mongosMain] connection accepted from 127.0.0.1:53398 #4 (2 connections now open)
m30002| Thu Jun 14 01:37:10 [conn4] build index sharded_files_id.fs.files { _id: 1 }
m30002| Thu Jun 14 01:37:10 [conn4] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:37:10 [conn4] info: creating collection sharded_files_id.fs.files on add index
m30002| Thu Jun 14 01:37:10 [conn4] build index sharded_files_id.fs.files { filename: 1 }
m30002| Thu Jun 14 01:37:10 [conn4] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:37:10 [conn4] build index sharded_files_id.fs.chunks { files_id: 1, n: 1 }
m30002| Thu Jun 14 01:37:10 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:37:10 [conn] resetting shard version of sharded_files_id.fs.chunks on localhost:30000, version is zero
m30999| Thu Jun 14 01:37:10 [conn] resetting shard version of sharded_files_id.fs.chunks on localhost:30001, version is zero
m30002| Thu Jun 14 01:37:10 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:10 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:10 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:10 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
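These two lines recur for essentially every insert batch of the upload: after each batch mongos asks the shard to look for split points in the chunk, and the warning means the search keeps landing on a single shard-key value. With the collection sharded on { files_id: 1 } alone, every GridFS chunk of a given file carries the same files_id, so no split point can fall inside a file and the only chunk remains unsplittable while this file is being written. A small sketch, assuming a shell on the mongos and using the ObjectId from the warning (the chunk count is inferred from the earlier uploads of the same ~100 MB mongod binary):

    // all GridFS chunks of one file share a single shard-key value
    var sdb = db.getSiblingDB("sharded_files_id");
    sdb.fs.chunks.count({ files_id: ObjectId("4fd97886c589ad02ce144445") }); // ~402 chunks for this file
    sdb.fs.chunks.distinct("files_id").length;                               // 1 while only this file exists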
m30002| Thu Jun 14 01:37:10 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.2, filling with zeroes...
m30002| Thu Jun 14 01:37:10 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:10 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:11 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:11 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:12 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.2, size: 64MB, took 1.695 secs
m30002| Thu Jun 14 01:37:12 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:1239872 1078ms
m30002| Thu Jun 14 01:37:12 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.3, filling with zeroes...
m30002| Thu Jun 14 01:37:12 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:12 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:13 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:1619112 328ms
m30002| Thu Jun 14 01:37:13 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:13 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:15 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.3, size: 128MB, took 3.614 secs
m30002| Thu Jun 14 01:37:15 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:4045333 2399ms
m30002| Thu Jun 14 01:37:15 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:15 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:15 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.4, filling with zeroes...
m30002| Thu Jun 14 01:37:15 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:15 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:4212231 157ms
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:4363413 144ms
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:4590729 227ms
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
m30002| Thu Jun 14 01:37:16 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey }
m30002| Thu Jun 14 01:37:16 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97886c589ad02ce144445') }
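The run of identical warnings above is expected in this phase: fs.chunks is sharded on { files_id: 1 } alone, so every 256 KB GridFS chunk of the file carries the same shard-key value, the splitter can never find a split point inside the chunk, and the chunk stays above the test's tiny 1 KB threshold. A hedged pymongo sketch of the kind of size check involved, using the dataSize command (the mongos port and database name are taken from this log; everything else is illustrative):

    # Hedged sketch: measure how much data sits in the single fs.chunks chunk
    # when the shard key is { files_id: 1 } only.  Assumes the mongos on 30999.
    from pymongo import MongoClient
    from bson.min_key import MinKey
    from bson.max_key import MaxKey

    client = MongoClient("localhost", 30999)
    stats = client.sharded_files_id.command(
        "dataSize", "sharded_files_id.fs.chunks",
        keyPattern={"files_id": 1},
        min={"files_id": MinKey()},
        max={"files_id": MaxKey()},
    )
    print(stats["size"], stats["numObjects"])   # the whole file lives in one chunk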
m30002| Thu Jun 14 01:37:17 [conn4] command sharded_files_id.$cmd command: { filemd5: ObjectId('4fd97886c589ad02ce144445'), root: "fs" } ntoreturn:1 keyUpdates:0 numYields: 402 locks(micros) r:86285 w:4602614 reslen:94 419ms
sh25735| added file: { _id: ObjectId('4fd97886c589ad02ce144445'), filename: "mongod", chunkSize: 262144, uploadDate: new Date(1339652237397), md5: "cd2eb30417f1f1fb1c666ccb462da035", length: 105292849 }
sh25735| done!
m30999| Thu Jun 14 01:37:17 [conn] end connection 127.0.0.1:53398 (1 connection now open)
fileObj: {
    "_id" : ObjectId("4fd97886c589ad02ce144445"),
    "filename" : "mongod",
    "chunkSize" : 262144,
    "uploadDate" : ISODate("2012-06-14T05:37:17.397Z"),
    "md5" : "cd2eb30417f1f1fb1c666ccb462da035",
    "length" : 105292849
}
m30002| Thu Jun 14 01:37:17 [conn3] command sharded_files_id.$cmd command: { filemd5: ObjectId('4fd97886c589ad02ce144445') } ntoreturn:1 keyUpdates:0 numYields: 402 locks(micros) W:72 r:54804 reslen:94 402ms
filemd5 output: { "numChunks" : 402, "md5" : "cd2eb30417f1f1fb1c666ccb462da035", "ok" : 1 }
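The filemd5 command above is the integrity check mongofiles runs after the upload: the shard walks fs.chunks in (files_id, n) order, hashes them, and the result must match the md5 recorded in fs.files. A hedged pymongo equivalent of that round trip (port and database names come from this log, the rest is illustrative):

    # Sketch of the post-upload check: recompute the MD5 server-side with
    # filemd5 and compare it to the md5 stored in fs.files.
    from pymongo import MongoClient
    from bson.son import SON

    client = MongoClient("localhost", 30999)
    db = client.sharded_files_id

    file_doc = db.fs.files.find_one({"filename": "mongod"})
    # Command documents are order-sensitive, so build the command with SON.
    result = db.command(SON([("filemd5", file_doc["_id"]), ("root", "fs")]))

    assert result["md5"] == file_doc["md5"], "fs.chunks do not match fs.files"
    print(result["numChunks"], result["md5"])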
**** sharded collection on files_id,n ****
m30999| Thu Jun 14 01:37:19 [conn] couldn't find database [sharded_files_id_n] in config db
m30999| Thu Jun 14 01:37:19 [conn] put [sharded_files_id_n] on: shard0000:localhost:30000
m30999| Thu Jun 14 01:37:19 [conn] enabling sharding on: sharded_files_id_n
m30999| Thu Jun 14 01:37:19 [conn] CMD: shardcollection: { shardcollection: "sharded_files_id_n.fs.chunks", key: { files_id: 1.0, n: 1.0 } }
m30999| Thu Jun 14 01:37:19 [conn] enable sharding on: sharded_files_id_n.fs.chunks with shard key: { files_id: 1.0, n: 1.0 }
m30999| Thu Jun 14 01:37:19 [conn] going to create 1 chunk(s) for: sharded_files_id_n.fs.chunks using new epoch 4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:19 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 3 version: 1|0||4fd9788f6dfcc1afddb64962 based on: (empty)
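This phase re-shards GridFS on the compound key { files_id: 1, n: 1 }, which is what the shardcollection command above sets up. A hedged pymongo sketch of the same two admin commands (connection details assumed from this log):

    # Sketch: enable sharding for the database and shard fs.chunks on
    # { files_id: 1, n: 1 }, mirroring the mongos log lines above.
    from pymongo import MongoClient

    client = MongoClient("localhost", 30999)
    admin = client.admin

    admin.command("enableSharding", "sharded_files_id_n")
    admin.command(
        "shardCollection",
        "sharded_files_id_n.fs.chunks",
        key={"files_id": 1.0, "n": 1.0},
    )

With n in the key, each 256 KB GridFS chunk gets a distinct shard-key value, so chunks can be split and migrated; that is exactly what the autosplits and moveChunks later in this log do.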
m30000| Thu Jun 14 01:37:19 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.ns, filling with zeroes...
m30002| Thu Jun 14 01:37:21 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.4, size: 256MB, took 5.982 secs
m30000| Thu Jun 14 01:37:21 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.ns, size: 16MB, took 1.989 secs
m30000| Thu Jun 14 01:37:21 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.0, filling with zeroes...
m30000| Thu Jun 14 01:37:22 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.0, size: 16MB, took 0.265 secs
m30000| Thu Jun 14 01:37:22 [conn5] build index sharded_files_id_n.fs.chunks { _id: 1 }
m30000| Thu Jun 14 01:37:22 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:37:22 [conn5] info: creating collection sharded_files_id_n.fs.chunks on add index
m30000| Thu Jun 14 01:37:22 [conn5] build index sharded_files_id_n.fs.chunks { files_id: 1.0, n: 1.0 }
m30000| Thu Jun 14 01:37:22 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:37:22 [conn5] insert sharded_files_id_n.system.indexes keyUpdates:0 locks(micros) W:158362 r:178207 w:2561520 2561ms
m30000| Thu Jun 14 01:37:22 [conn3] command admin.$cmd command: { setShardVersion: "sharded_files_id_n.fs.chunks", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), serverID: ObjectId('4fd978686dfcc1afddb6495c'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:192 r:56841 reslen:213 2560ms
m30000| Thu Jun 14 01:37:22 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.1, filling with zeroes...
m30000| Thu Jun 14 01:37:22 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:37:22 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' acquired, ts : 4fd978926dfcc1afddb64963
m30999| Thu Jun 14 01:37:22 [conn] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30001, version is zero
m30999| Thu Jun 14 01:37:22 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:37:22 [conn] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30002, version is zero
m30999| Thu Jun 14 01:37:22 [Balancer] shard0000 maxSize: 0 currSize: 288 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:22 [Balancer] shard0001 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:22 [Balancer] shard0002 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:22 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:37:22 [Balancer] shard0000
m30999| Thu Jun 14 01:37:22 [Balancer] shard0001
m30999| Thu Jun 14 01:37:22 [Balancer] shard0002
m30999| Thu Jun 14 01:37:22 [Balancer] { _id: "sharded_files_id.fs.chunks-files_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd978806dfcc1afddb64960'), ns: "sharded_files_id.fs.chunks", min: { files_id: MinKey }, max: { files_id: MaxKey }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:22 [Balancer] ----
m30999| Thu Jun 14 01:37:22 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:37:22 [Balancer] shard0000 maxSize: 0 currSize: 288 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:22 [Balancer] shard0001 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:22 [Balancer] shard0002 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:22 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:37:22 [Balancer] shard0000
m30999| Thu Jun 14 01:37:22 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_MinKeyn_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: MinKey, n: MinKey }, max: { files_id: MaxKey, n: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:22 [Balancer] shard0001
m30999| Thu Jun 14 01:37:22 [Balancer] shard0002
m30999| Thu Jun 14 01:37:22 [Balancer] ----
m30999| Thu Jun 14 01:37:22 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' unlocked.
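Each Balancer block above is one balancing round: mongos takes the distributed 'balancer' lock, builds a per-shard size map (ShardInfoMap) and a shard-to-chunks map (ShardToChunksMap) from the config metadata, decides nothing needs to move, and unlocks. A hedged sketch of rebuilding that chunk map yourself from the standard config.shards / config.chunks collections (connection details assumed from this log):

    # Sketch: reconstruct the Balancer's ShardToChunksMap view by reading
    # the config database through mongos.
    from collections import defaultdict
    from pymongo import MongoClient

    client = MongoClient("localhost", 30999)
    config = client.config

    chunks_by_shard = defaultdict(list)
    for chunk in config.chunks.find({"ns": "sharded_files_id_n.fs.chunks"}):
        chunks_by_shard[chunk["shard"]].append((chunk["min"], chunk["max"]))

    for shard in config.shards.find():
        print(shard["_id"], len(chunks_by_shard[shard["_id"]]), "chunk(s)")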
m29000| Thu Jun 14 01:37:22 [initandlisten] connection accepted from 127.0.0.1:54752 #8 (8 connections now open)
Thu Jun 14 01:37:22 shell: started program /mnt/slaves/Linux_32bit/mongo/mongofiles --port 30999 put mongod --db sharded_files_id_n
m30999| Thu Jun 14 01:37:22 [mongosMain] connection accepted from 127.0.0.1:53400 #5 (2 connections now open)
sh25747| connected to: 127.0.0.1:30999
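mongofiles here streams the ~100 MB mongod binary through the mongos into GridFS. A driver-side equivalent of that upload, sketched with pymongo's gridfs module (the local path is a hypothetical placeholder):

    # Sketch of what `mongofiles put` does: write a local file into GridFS,
    # producing one fs.files document and ~400 fs.chunks documents.
    import gridfs
    from pymongo import MongoClient

    client = MongoClient("localhost", 30999)
    fs = gridfs.GridFS(client.sharded_files_id_n)     # default root "fs"

    with open("/path/to/mongod", "rb") as f:          # hypothetical path
        file_id = fs.put(f, filename="mongod")

    print(file_id, fs.get(file_id).length)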
m30000| Thu Jun 14 01:37:22 [conn4] build index sharded_files_id_n.fs.files { _id: 1 }
m30000| Thu Jun 14 01:37:22 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:37:22 [conn4] info: creating collection sharded_files_id_n.fs.files on add index
m30000| Thu Jun 14 01:37:22 [conn4] build index sharded_files_id_n.fs.files { filename: 1 }
m30000| Thu Jun 14 01:37:22 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:37:22 [conn] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30001, version is zero
m30999| Thu Jun 14 01:37:22 [conn] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30002, version is zero
m30000| Thu Jun 14 01:37:22 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }
m30000| Thu Jun 14 01:37:22 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }
m30000| Thu Jun 14 01:37:22 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:22 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }
m29000| Thu Jun 14 01:37:22 [initandlisten] connection accepted from 127.0.0.1:54754 #9 (9 connections now open)
m30000| Thu Jun 14 01:37:22 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: MinKey, n: MinKey }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_MinKeyn_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:22 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:22 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97892349b5f269525c236
m30000| Thu Jun 14 01:37:22 [conn5] splitChunk accepted at version 1|0||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:22 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:22-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652242752), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: MinKey, n: MinKey }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: MinKey, n: MinKey }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:22 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:22 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30000:1339652242:952203482 (sleeping for 30000ms)
m30999| Thu Jun 14 01:37:22 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 4 version: 1|2||4fd9788f6dfcc1afddb64962 based on: 1|0||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:22 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { files_id: MinKey, n: MinKey } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 } (splitThreshold 921)
m30000| Thu Jun 14 01:37:22 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 0 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:22 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 0 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:22 [conn5] warning: chunk is larger than 524288 bytes because of key { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }
m30000| Thu Jun 14 01:37:22 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_0", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:22 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:22 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97892349b5f269525c238
m30000| Thu Jun 14 01:37:22 [conn5] splitChunk accepted at version 1|2||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:22 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:22-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652242762), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:22 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:22 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 5 version: 1|4||4fd9788f6dfcc1afddb64962 based on: 1|2||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:22 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } (splitThreshold 471859) (migrate suggested)
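The autosplit above is driven by mongos: once enough bytes land in a chunk it asks the owning shard for split points and issues splitChunk, here cutting at n: 3. The same cut can be requested by hand through the admin-level split command; a hedged pymongo sketch (the ObjectId is copied from this log):

    # Sketch: manually split the fs.chunks chunk at n: 3, the point the
    # autosplitter chose above.
    from pymongo import MongoClient
    from bson import ObjectId

    client = MongoClient("localhost", 30999)
    client.admin.command(
        "split",
        "sharded_files_id_n.fs.chunks",
        middle={"files_id": ObjectId("4fd97892353ee533a5bdfb63"), "n": 3},
    )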
m30999| Thu Jun 14 01:37:22 [conn] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|4||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } max: { files_id: MaxKey, n: MaxKey } to: shard0001:localhost:30001
m30999| Thu Jun 14 01:37:22 [conn] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|4||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } max: { files_id: MaxKey, n: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:37:22 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_3", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:22 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:22 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97892349b5f269525c239
m30000| Thu Jun 14 01:37:22 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:22-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652242766), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:37:22 [conn5] moveChunk request accepted at version 1|4||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:22 [conn5] moveChunk number of documents: 1
m30001| Thu Jun 14 01:37:22 [initandlisten] connection accepted from 127.0.0.1:44424 #6 (6 connections now open)
m30000| Thu Jun 14 01:37:22 [initandlisten] connection accepted from 127.0.0.1:56522 #6 (6 connections now open)
m30001| Thu Jun 14 01:37:22 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.ns, filling with zeroes...
m30000| Thu Jun 14 01:37:22 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.1, size: 32MB, took 0.755 secs
m30001| Thu Jun 14 01:37:23 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.ns, size: 16MB, took 0.479 secs
m30001| Thu Jun 14 01:37:23 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.0, filling with zeroes...
m30001| Thu Jun 14 01:37:23 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.0, size: 16MB, took 0.268 secs
m30001| Thu Jun 14 01:37:23 [migrateThread] build index sharded_files_id_n.fs.chunks { _id: 1 }
m30001| Thu Jun 14 01:37:23 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:37:23 [migrateThread] info: creating collection sharded_files_id_n.fs.chunks on add index
m30001| Thu Jun 14 01:37:23 [migrateThread] build index sharded_files_id_n.fs.chunks { files_id: 1.0, n: 1.0 }
m30001| Thu Jun 14 01:37:23 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:37:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } -> { files_id: MaxKey, n: MaxKey }
m30001| Thu Jun 14 01:37:23 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.1, filling with zeroes...
m30000| Thu Jun 14 01:37:23 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:37:23 [conn5] moveChunk setting version to: 2|0||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } -> { files_id: MaxKey, n: MaxKey }
m30001| Thu Jun 14 01:37:23 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:23-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652243775), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 847, step2 of 5: 0, step3 of 5: 2, step4 of 5: 0, step5 of 5: 157 } }
m29000| Thu Jun 14 01:37:23 [initandlisten] connection accepted from 127.0.0.1:54757 #10 (10 connections now open)
m30000| Thu Jun 14 01:37:23 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:37:23 [conn5] moveChunk updating self version to: 2|1||4fd9788f6dfcc1afddb64962 through { files_id: MinKey, n: MinKey } -> { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 } for collection 'sharded_files_id_n.fs.chunks'
m30000| Thu Jun 14 01:37:23 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:23-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652243779), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:37:23 [conn5] doing delete inline
m30000| Thu Jun 14 01:37:23 [conn5] moveChunk deleted: 1
m30000| Thu Jun 14 01:37:23 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:23 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:23-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652243957), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 8, step6 of 6: 177 } }
m30000| Thu Jun 14 01:37:23 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_3", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:158362 r:179026 w:2738521 reslen:37 1191ms
m30999| Thu Jun 14 01:37:23 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 6 version: 2|1||4fd9788f6dfcc1afddb64962 based on: 1|4||4fd9788f6dfcc1afddb64962
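The migration logged above (shard0000 -> shard0001) follows the usual moveChunk protocol: the donor takes the collection's distributed lock, the recipient clones the chunk's documents and reports "steady", the donor commits the new version to the config servers and then deletes its local copy. Triggering the same move by hand would look roughly like this (a hedged sketch; the find document only needs to fall inside the chunk):

    # Sketch: move the chunk containing { files_id: ..., n: 3 } to shard0001,
    # the same migration mongos initiated automatically above.
    from pymongo import MongoClient
    from bson import ObjectId

    client = MongoClient("localhost", 30999)
    client.admin.command(
        "moveChunk",
        "sharded_files_id_n.fs.chunks",
        find={"files_id": ObjectId("4fd97892353ee533a5bdfb63"), "n": 3},
        to="shard0001",
    )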
m30001| Thu Jun 14 01:37:23 [conn4] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:37:23 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 3 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:23 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 3 } -->> { : MaxKey, : MaxKey }
m29000| Thu Jun 14 01:37:23 [initandlisten] connection accepted from 127.0.0.1:54758 #11 (11 connections now open)
m30001| Thu Jun 14 01:37:23 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_3", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:23 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:23 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978933bf4d0a915b7ffb8
m30001| Thu Jun 14 01:37:23 [conn5] splitChunk accepted at version 2|0||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:23 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:23-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652243973), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:23 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:23 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30001:1339652243:1717811668 (sleeping for 30000ms)
m30999| Thu Jun 14 01:37:23 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 7 version: 2|3||4fd9788f6dfcc1afddb64962 based on: 2|1||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:23 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 2|0||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:23 [conn] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } max: { files_id: MaxKey, n: MaxKey } to: shard0002:localhost:30002
m30999| Thu Jun 14 01:37:23 [conn] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } max: { files_id: MaxKey, n: MaxKey }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:37:23 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_6", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:23 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:23 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978933bf4d0a915b7ffb9
m30001| Thu Jun 14 01:37:23 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:23-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652243978), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:37:23 [conn5] moveChunk request accepted at version 2|3||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:23 [conn5] moveChunk number of documents: 1
m30002| Thu Jun 14 01:37:23 [initandlisten] connection accepted from 127.0.0.1:45521 #6 (6 connections now open)
m30001| Thu Jun 14 01:37:23 [initandlisten] connection accepted from 127.0.0.1:44429 #7 (7 connections now open)
m30002| Thu Jun 14 01:37:23 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id_n.ns, filling with zeroes...
m30001| Thu Jun 14 01:37:24 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.1, size: 32MB, took 0.682 secs
m30002| Thu Jun 14 01:37:24 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id_n.ns, size: 16MB, took 0.712 secs
m30002| Thu Jun 14 01:37:24 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id_n.0, filling with zeroes...
m30001| Thu Jun 14 01:37:24 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Thu Jun 14 01:37:25 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id_n.0, size: 16MB, took 0.349 secs
m30002| Thu Jun 14 01:37:25 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id_n.1, filling with zeroes...
m30002| Thu Jun 14 01:37:25 [migrateThread] build index sharded_files_id_n.fs.chunks { _id: 1 }
m30002| Thu Jun 14 01:37:25 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:37:25 [migrateThread] info: creating collection sharded_files_id_n.fs.chunks on add index
m30002| Thu Jun 14 01:37:25 [migrateThread] build index sharded_files_id_n.fs.chunks { files_id: 1.0, n: 1.0 }
m30002| Thu Jun 14 01:37:25 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:37:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } -> { files_id: MaxKey, n: MaxKey }
m30002| Thu Jun 14 01:37:25 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id_n.1, size: 32MB, took 0.73 secs
m30001| Thu Jun 14 01:37:25 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:37:25 [conn5] moveChunk setting version to: 3|0||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } -> { files_id: MaxKey, n: MaxKey }
m30002| Thu Jun 14 01:37:25 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:25-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652245995), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 1072, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 934 } }
m30001| Thu Jun 14 01:37:25 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:37:25 [conn5] moveChunk updating self version to: 3|1||4fd9788f6dfcc1afddb64962 through { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } -> { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } for collection 'sharded_files_id_n.fs.chunks'
m30001| Thu Jun 14 01:37:25 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:25-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652245999), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:37:25 [conn5] doing delete inline
m30001| Thu Jun 14 01:37:26 [conn5] moveChunk deleted: 1
m30001| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652246001), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2008, step5 of 6: 12, step6 of 6: 1 } }
m30001| Thu Jun 14 01:37:26 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_6", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:207042 w:961 reslen:37 2024ms
m30002| Thu Jun 14 01:37:26 [conn4] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 8 version: 3|1||4fd9788f6dfcc1afddb64962 based on: 2|3||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 6 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 6 } -->> { : MaxKey, : MaxKey }
m29000| Thu Jun 14 01:37:26 [initandlisten] connection accepted from 127.0.0.1:54761 #12 (12 connections now open)
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_6", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30002:1339652246:784930048 (sleeping for 30000ms)
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcbd
m29000| Thu Jun 14 01:37:26 [initandlisten] connection accepted from 127.0.0.1:54762 #13 (13 connections now open)
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|0||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246022), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 9 version: 3|3||4fd9788f6dfcc1afddb64962 based on: 3|1||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|0||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 9 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 9 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 9 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 9 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_9", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcc1
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|3||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246039), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 10 version: 3|5||4fd9788f6dfcc1afddb64962 based on: 3|3||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|3||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 12 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 12 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 12 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 12 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_12", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcc5
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|5||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246056), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 }, lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 11 version: 3|7||4fd9788f6dfcc1afddb64962 based on: 3|5||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|5||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 15 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 15 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 15 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 15 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_15", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcc9
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|7||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246073), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 }, lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 12 version: 3|9||4fd9788f6dfcc1afddb64962 based on: 3|7||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|7||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 18 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 18 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 18 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 18 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_18", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dccd
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|9||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246089), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 }, lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 13 version: 3|11||4fd9788f6dfcc1afddb64962 based on: 3|9||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|9||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 } (splitThreshold 943718) (migrate suggested)
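The repeated "request split points lookup" lines are the shard sizing the chunk on behalf of mongos; as soon as it finds more split points than requested (2 here) it stops early and proposes a single split key. A hypothetical way to ask the shard for split points directly is sketched below; the option names are my assumption about the shard-side splitVector interface, and the key values are copied from the log:

    // Hypothetical splitVector call against shard0002 (the m30002 process),
    // mirroring the "request split points lookup" entries above.
    var shard = new Mongo("localhost:30002");
    shard.getDB("admin").runCommand({
        splitVector: "sharded_files_id_n.fs.chunks",
        keyPattern: { files_id: 1, n: 1 },
        min: { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 21 },
        max: { files_id: MaxKey, n: MaxKey },
        maxChunkSizeBytes: 1048576
    });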
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 21 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 21 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 21 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 21 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_21", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcd1
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|11||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246105), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 }, lastmod: Timestamp 3000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|13, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 14 version: 3|13||4fd9788f6dfcc1afddb64962 based on: 3|11||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|11||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 24 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 24 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 24 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 24 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_24", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcd5
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|13||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246121), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 }, lastmod: Timestamp 3000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|15, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 15 version: 3|15||4fd9788f6dfcc1afddb64962 based on: 3|13||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|13||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 27 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 27 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 27 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 27 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_27", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcd9
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|15||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246137), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 }, lastmod: Timestamp 3000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|17, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 16 version: 3|17||4fd9788f6dfcc1afddb64962 based on: 3|15||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|15||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 30 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 30 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 30 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 30 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_30", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcdd
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|17||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246156), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 }, lastmod: Timestamp 3000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|19, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 17 version: 3|19||4fd9788f6dfcc1afddb64962 based on: 3|17||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|17||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 33 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 33 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 33 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 33 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_33", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dce1
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|19||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246172), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 }, lastmod: Timestamp 3000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|21, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 18 version: 3|21||4fd9788f6dfcc1afddb64962 based on: 3|19||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|19||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 36 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 36 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 36 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 36 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_36", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dce5
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|21||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246188), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 }, lastmod: Timestamp 3000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|23, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 19 version: 3|23||4fd9788f6dfcc1afddb64962 based on: 3|21||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|21||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 39 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 39 } -->> { : MaxKey, : MaxKey }
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 20 version: 3|25||4fd9788f6dfcc1afddb64962 based on: 3|23||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|23||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 39 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 39 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_39", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dce9
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|23||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246204), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 }, lastmod: Timestamp 3000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|25, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:26 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id_n.2, filling with zeroes...
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 42 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 42 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 42 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 42 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_42", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dced
m30002| Thu Jun 14 01:37:26 [conn5] splitChunk accepted at version 3|25||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246273), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, lastmod: Timestamp 3000|26, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|27, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:26 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 21 version: 3|27||4fd9788f6dfcc1afddb64962 based on: 3|25||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:26 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|25||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:26 [conn] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|27||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 } max: { files_id: MaxKey, n: MaxKey } to: shard0000:localhost:30000
m30999| Thu Jun 14 01:37:26 [conn] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|27||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 } max: { files_id: MaxKey, n: MaxKey }) shard0002:localhost:30002 -> shard0000:localhost:30000
m30002| Thu Jun 14 01:37:26 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_45", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd97896af0847faaef2dcee
m30002| Thu Jun 14 01:37:26 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:26-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652246293), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", to: "shard0000" } }
m30002| Thu Jun 14 01:37:26 [conn5] moveChunk request accepted at version 3|27||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:26 [conn5] moveChunk number of documents: 1
m30000| Thu Jun 14 01:37:26 [initandlisten] connection accepted from 127.0.0.1:56529 #7 (7 connections now open)
m30002| Thu Jun 14 01:37:26 [initandlisten] connection accepted from 127.0.0.1:45526 #7 (7 connections now open)
m30000| Thu Jun 14 01:37:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 } -> { files_id: MaxKey, n: MaxKey }
m30002| Thu Jun 14 01:37:27 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30002", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Thu Jun 14 01:37:27 [conn5] moveChunk setting version to: 4|0||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 } -> { files_id: MaxKey, n: MaxKey }
m30000| Thu Jun 14 01:37:27 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:27-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652247303), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 22, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 977 } }
m30002| Thu Jun 14 01:37:27 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30002", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 }
m30002| Thu Jun 14 01:37:27 [conn5] moveChunk updating self version to: 4|1||4fd9788f6dfcc1afddb64962 through { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } -> { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 } for collection 'sharded_files_id_n.fs.chunks'
m30002| Thu Jun 14 01:37:27 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:27-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652247307), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", to: "shard0000" } }
m30002| Thu Jun 14 01:37:27 [conn5] doing delete inline
m30002| Thu Jun 14 01:37:28 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id_n.2, size: 64MB, took 1.818 secs
m30002| Thu Jun 14 01:37:28 [conn5] moveChunk deleted: 1
m30002| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652248030), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 8, step6 of 6: 722 } }
m30002| Thu Jun 14 01:37:28 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_45", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:118 r:401831 w:5501832 reslen:37 1738ms
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 22 version: 4|1||4fd9788f6dfcc1afddb64962 based on: 3|27||4fd9788f6dfcc1afddb64962
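The block above is one complete auto-migration: mongos picks the top chunk for a move, shard0002 takes the collection's distributed lock and logs moveChunk.start, shard0000 clones the single document (262206 bytes), the commit bumps the chunk's version to 4|0 (and the donor's own version to 4|1), and the donor then deletes its copy inline. Of the 1738 ms the command took, steps 4 (1002 ms) and 6 (722 ms) of moveChunk.from account for nearly all of it, and the 64 MB datafile allocation running at the same time plausibly explains the slow final delete. As an illustration only, the same migration could be triggered by hand through the mongos; the command below is a hypothetical sketch with values copied from the log:

    // Hypothetical manual equivalent of the auto-move logged above,
    // run in a mongo shell connected to the mongos. "find" only needs
    // to name some key that falls inside the chunk being moved.
    db.getSiblingDB("admin").runCommand({
        moveChunk: "sharded_files_id_n.fs.chunks",
        find: { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 45 },
        to: "shard0000"
    });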
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 45 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 45 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 45 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 45 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_45", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c23d
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|0||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248050), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 23 version: 4|3||4fd9788f6dfcc1afddb64962 based on: 4|1||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 48 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 48 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 48 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 48 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_48", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c241
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|3||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248070), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 }, lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 24 version: 4|5||4fd9788f6dfcc1afddb64962 based on: 4|3||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|3||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 51 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 51 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 51 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 51 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_51", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c245
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|5||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248096), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 }, lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|7, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 25 version: 4|7||4fd9788f6dfcc1afddb64962 based on: 4|5||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|5||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 54 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 54 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 54 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 54 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_54", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c249
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|7||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248116), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 }, lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 26 version: 4|9||4fd9788f6dfcc1afddb64962 based on: 4|7||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|7||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 57 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 57 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 57 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 57 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_57", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c24d
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|9||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248140), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 }, lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|11, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 27 version: 4|11||4fd9788f6dfcc1afddb64962 based on: 4|9||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|9||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 60 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 60 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 60 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 60 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_60", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c251
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|11||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248161), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 }, lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 28 version: 4|13||4fd9788f6dfcc1afddb64962 based on: 4|11||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|11||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 63 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 63 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 63 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 63 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_63", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c255
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|13||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248185), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 }, lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|15, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 29 version: 4|15||4fd9788f6dfcc1afddb64962 based on: 4|13||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|13||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 66 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 66 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 66 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 66 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_66", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c259
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|15||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248205), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 }, lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|17, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 30 version: 4|17||4fd9788f6dfcc1afddb64962 based on: 4|15||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|15||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 69 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 69 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 69 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 69 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_69", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c25d
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|17||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248229), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 }, lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|19, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 31 version: 4|19||4fd9788f6dfcc1afddb64962 based on: 4|17||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|17||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 72 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 72 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 72 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 72 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_72", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c261
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|19||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248249), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 }, lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 32 version: 4|21||4fd9788f6dfcc1afddb64962 based on: 4|19||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|19||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 75 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 75 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 75 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 75 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_75", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c265
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|21||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248273), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 }, lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|23, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 33 version: 4|23||4fd9788f6dfcc1afddb64962 based on: 4|21||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|21||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 } (splitThreshold 943718) (migrate suggested)
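
Every one of these cycles fires at splitThreshold 943718 bytes, which is roughly 90% of the maxChunkSizeBytes: 1048576 that appears in the moveChunk request further down, so the test is evidently running with a 1 MB chunk size to force frequent splits and migrations. A minimal sketch of checking that setting from a client, assuming the conventional config.settings "chunksize" document and a mongos at localhost:30999:

    from pymongo import MongoClient

    # Assumption: mongos at localhost:30999; if the chunk size was set
    # explicitly, it lives in config.settings under _id "chunksize" (value in MB).
    mongos = MongoClient("localhost", 30999)
    doc = mongos.config.settings.find_one({"_id": "chunksize"})
    print("configured chunk size (MB):", doc["value"] if doc else "server default")
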
m30999| Thu Jun 14 01:37:28 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 34 version: 4|25||4fd9788f6dfcc1afddb64962 based on: 4|23||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:28 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|23||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:28 [conn] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|25||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 } max: { files_id: MaxKey, n: MaxKey } to: shard0001:localhost:30001
m30999| Thu Jun 14 01:37:28 [conn] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 4|25||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 } max: { files_id: MaxKey, n: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:37:28 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.2, filling with zeroes...
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 78 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 78 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 78 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 78 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:28 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_78", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c269
m30000| Thu Jun 14 01:37:28 [conn5] splitChunk accepted at version 4|23||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248295), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|25, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:28 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_81", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd97898349b5f269525c26a
m30000| Thu Jun 14 01:37:28 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:28-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652248300), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:37:28 [conn5] moveChunk request accepted at version 4|25||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:28 [conn5] moveChunk number of documents: 1
m30001| Thu Jun 14 01:37:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 } -> { files_id: MaxKey, n: MaxKey }
m30000| Thu Jun 14 01:37:29 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:37:29 [conn5] moveChunk setting version to: 5|0||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 } -> { files_id: MaxKey, n: MaxKey }
m30001| Thu Jun 14 01:37:29 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652249307), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 3, step4 of 5: 0, step5 of 5: 1001 } }
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 35 version: 5|1||4fd9788f6dfcc1afddb64962 based on: 4|25||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:29 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:37:29 [conn5] moveChunk updating self version to: 5|1||4fd9788f6dfcc1afddb64962 through { files_id: MinKey, n: MinKey } -> { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 } for collection 'sharded_files_id_n.fs.chunks'
m30000| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652249311), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:37:29 [conn5] doing delete inline
m30000| Thu Jun 14 01:37:29 [conn5] moveChunk deleted: 1
m30000| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652249313), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 8, step6 of 6: 1 } }
m30000| Thu Jun 14 01:37:29 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_81", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:158362 r:184925 w:2739541 reslen:37 1013ms
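
That 1013ms moveChunk command is the whole migration lifecycle in one place: mongos picked shard0001 as the target, shard0000 logged moveChunk.start, the TO-shard cloned the single document (clonedBytes: 262206) and reported state "steady", the FROM-shard bumped the collection version to 5|0, committed the change on the config server, deleted the moved document inline, and recorded the per-step timings in the moveChunk.from changelog event. The sketch below issues the same kind of migration by hand through the mongos; it is a sketch only, assuming localhost:30999 for the mongos and the shard names shown in the log.

    from bson import ObjectId
    from pymongo import MongoClient

    # Assumption: the mongos logging as m30999 listens on localhost:30999.
    mongos = MongoClient("localhost", 30999)

    # "find" names any shard-key value inside the chunk to move; "to" is the
    # target shard id as it appears in the log (shard0001 -> localhost:30001).
    mongos.admin.command(
        "moveChunk",
        "sharded_files_id_n.fs.chunks",
        find={"files_id": ObjectId("4fd97892353ee533a5bdfb63"), "n": 81},
        to="shard0001",
    )
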
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 81 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 81 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 81 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 81 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_81", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffbd
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|0||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249327), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 }, lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 36 version: 5|3||4fd9788f6dfcc1afddb64962 based on: 5|1||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|0||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 84 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 84 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 84 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 84 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_84", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffc1
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|3||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249351), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 }, lastmod: Timestamp 5000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|5, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 37 version: 5|5||4fd9788f6dfcc1afddb64962 based on: 5|3||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|3||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 87 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 87 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 87 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 87 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_87", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffc5
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|5||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249371), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 }, lastmod: Timestamp 5000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|7, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 38 version: 5|7||4fd9788f6dfcc1afddb64962 based on: 5|5||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|5||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 90 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 90 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 90 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 90 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_90", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffc9
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|7||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249391), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 }, lastmod: Timestamp 5000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|9, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m29000| Thu Jun 14 01:37:29 [conn10] insert config.changelog keyUpdates:0 locks(micros) r:2544 w:350317 349ms
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 39 version: 5|9||4fd9788f6dfcc1afddb64962 based on: 5|7||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|7||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 93 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 93 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 93 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 93 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_93", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffcd
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|9||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249760), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 }, lastmod: Timestamp 5000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|11, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 40 version: 5|11||4fd9788f6dfcc1afddb64962 based on: 5|9||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|9||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 96 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 96 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 96 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 96 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_96", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffd1
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|11||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249780), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, lastmod: Timestamp 5000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|13, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 41 version: 5|13||4fd9788f6dfcc1afddb64962 based on: 5|11||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|11||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 99 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 99 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 99 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 99 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_99", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffd5
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|13||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:29 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.2, size: 64MB, took 1.66 secs
m29000| Thu Jun 14 01:37:29 [conn11] command config.$cmd command: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_99", lastmod: Timestamp 5000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, shard: "shard0001" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_99" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_102", lastmod: Timestamp 5000|15, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, max: { files_id: MaxKey, n: MaxKey }, shard: "shard0001" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_102" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "sharded_files_id_n.fs.chunks" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 5000|13 } } ] } ntoreturn:1 keyUpdates:0 locks(micros) W:151556 r:1622 w:3575 reslen:72 148ms
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249951), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, lastmod: Timestamp 5000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|15, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:29 [conn5] command admin.$cmd command: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_99", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:210677 w:961 reslen:119 151ms
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 42 version: 5|15||4fd9788f6dfcc1afddb64962 based on: 5|13||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|13||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 } (splitThreshold 943718) (migrate suggested)
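
The applyOps line on the config server (m29000) shows how a split actually lands in metadata: two upserts into config.chunks, one per half of the old chunk, with new lastmod timestamps (5000|14 and 5000|15), guarded by a preCondition that the collection's highest lastmod is still 5000|13; if another process had committed a change in the meantime, the precondition would fail and the split would have to retry against fresh metadata. The same metadata can be read back from a driver; a minimal sketch, assuming a mongos at localhost:30999:

    from pymongo import MongoClient

    # Assumption: mongos at localhost:30999; config.* is readable through it.
    mongos = MongoClient("localhost", 30999)

    # Current chunk ranges for the GridFS chunks collection, newest version first.
    for chunk in (mongos.config.chunks
                  .find({"ns": "sharded_files_id_n.fs.chunks"})
                  .sort("lastmod", -1)
                  .limit(5)):
        print(chunk["shard"], chunk["min"], chunk["max"], chunk["lastmod"])

    # The history behind the "about to log metadata event" lines above.
    for event in mongos.config.changelog.find({"what": "split"}).limit(5):
        print(event["time"], event["ns"], event["details"]["before"]["lastmod"])
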
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 102 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 102 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 102 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 102 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_102", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffd9
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|15||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249968), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 }, lastmod: Timestamp 5000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|17, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 0ms sequenceNumber: 43 version: 5|17||4fd9788f6dfcc1afddb64962 based on: 5|15||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|15||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 105 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 105 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 105 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 105 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_105", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd978993bf4d0a915b7ffdd
m30001| Thu Jun 14 01:37:29 [conn5] splitChunk accepted at version 5|17||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:29 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:29-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652249985), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 }, lastmod: Timestamp 5000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|19, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:29 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:29 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 44 version: 5|19||4fd9788f6dfcc1afddb64962 based on: 5|17||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:29 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|17||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 108 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 108 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 108 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:29 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 108 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_108", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7ffe1
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|19||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250002), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 }, lastmod: Timestamp 5000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|21, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 45 version: 5|21||4fd9788f6dfcc1afddb64962 based on: 5|19||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|19||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 111 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 111 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 111 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 111 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_111", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7ffe5
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|21||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250043), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 }, lastmod: Timestamp 5000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|23, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 46 version: 5|23||4fd9788f6dfcc1afddb64962 based on: 5|21||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|21||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 114 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.2, filling with zeroes...
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 114 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 114 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 114 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_114", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7ffe9
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|23||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250060), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 }, lastmod: Timestamp 5000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|25, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 117 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 117 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 117 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 117 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_117", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7ffed
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|25||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250124), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 }, lastmod: Timestamp 5000|26, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|27, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 120 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 120 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 120 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 120 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_120", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7fff1
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|27||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250182), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|27, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 }, lastmod: Timestamp 5000|28, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|29, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 123 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 123 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 123 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 123 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_123", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7fff5
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|29||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250214), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|29, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 }, lastmod: Timestamp 5000|30, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|31, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 126 } -->> { : MaxKey, : MaxKey }
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 47 version: 5|25||4fd9788f6dfcc1afddb64962 based on: 5|23||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|23||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 48 version: 5|27||4fd9788f6dfcc1afddb64962 based on: 5|25||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|25||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 49 version: 5|29||4fd9788f6dfcc1afddb64962 based on: 5|27||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|27||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 50 version: 5|31||4fd9788f6dfcc1afddb64962 based on: 5|29||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|29||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 126 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 126 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 126 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_126", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7fff9
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|31||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250236), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|31, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 }, lastmod: Timestamp 5000|32, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|33, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 51 version: 5|33||4fd9788f6dfcc1afddb64962 based on: 5|31||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|31||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 129 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 129 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 129 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 129 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_129", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b7fffd
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|33||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250260), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|33, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 }, lastmod: Timestamp 5000|34, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|35, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 52 version: 5|35||4fd9788f6dfcc1afddb64962 based on: 5|33||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|33||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 132 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 132 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 132 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 132 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_132", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80001
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|35||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250281), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|35, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 }, lastmod: Timestamp 5000|36, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|37, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 53 version: 5|37||4fd9788f6dfcc1afddb64962 based on: 5|35||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|35||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 135 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 135 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 135 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 135 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_135", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80005
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|37||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250312), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|37, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 }, lastmod: Timestamp 5000|38, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|39, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 54 version: 5|39||4fd9788f6dfcc1afddb64962 based on: 5|37||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|37||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 138 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 138 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 138 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 138 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_138", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80009
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|39||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250333), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|39, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 }, lastmod: Timestamp 5000|40, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|41, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 55 version: 5|41||4fd9788f6dfcc1afddb64962 based on: 5|39||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|39||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 141 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 141 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 141 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 141 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_141", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b8000d
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|41||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250357), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|41, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 }, lastmod: Timestamp 5000|42, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|43, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 56 version: 5|43||4fd9788f6dfcc1afddb64962 based on: 5|41||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|41||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 144 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 144 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 144 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 144 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_144", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80011
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|43||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250407), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|43, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 }, lastmod: Timestamp 5000|44, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|45, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 147 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 147 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 147 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 147 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_147", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80015
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|45||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250436), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|45, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 }, lastmod: Timestamp 5000|46, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|47, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 150 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 150 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 150 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 150 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_150", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80019
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|47||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250463), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|47, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 }, lastmod: Timestamp 5000|48, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|49, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 57 version: 5|45||4fd9788f6dfcc1afddb64962 based on: 5|43||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|43||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 58 version: 5|47||4fd9788f6dfcc1afddb64962 based on: 5|45||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|45||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 59 version: 5|49||4fd9788f6dfcc1afddb64962 based on: 5|47||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|47||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 153 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 153 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 153 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 153 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_153", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b8001d
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|49||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250495), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|49, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 }, lastmod: Timestamp 5000|50, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|51, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 156 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 156 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 156 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 156 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_156", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80021
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|51||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250517), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|51, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 }, lastmod: Timestamp 5000|52, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|53, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 159 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 159 } -->> { : MaxKey, : MaxKey }
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 60 version: 5|51||4fd9788f6dfcc1afddb64962 based on: 5|49||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|49||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 61 version: 5|53||4fd9788f6dfcc1afddb64962 based on: 5|51||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|51||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 159 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 159 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_159", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80025
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|53||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250547), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|53, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 }, lastmod: Timestamp 5000|54, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|55, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 62 version: 5|55||4fd9788f6dfcc1afddb64962 based on: 5|53||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|53||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 162 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 162 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 162 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 162 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_162", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80029
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|55||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250567), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|55, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 }, lastmod: Timestamp 5000|56, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|57, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 63 version: 5|57||4fd9788f6dfcc1afddb64962 based on: 5|55||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|55||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 165 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 165 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 165 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 165 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_165", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b8002d
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|57||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250591), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|57, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 }, lastmod: Timestamp 5000|58, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|59, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 64 version: 5|59||4fd9788f6dfcc1afddb64962 based on: 5|57||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|57||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 168 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 168 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 168 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 168 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_168", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80031
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|59||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250648), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|59, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 }, lastmod: Timestamp 5000|60, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|61, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 65 version: 5|61||4fd9788f6dfcc1afddb64962 based on: 5|59||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|59||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 171 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 171 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 171 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 171 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_171", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80035
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|61||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250669), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|61, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 }, lastmod: Timestamp 5000|62, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|63, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 66 version: 5|63||4fd9788f6dfcc1afddb64962 based on: 5|61||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|61||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 174 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 174 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 174 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 174 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_174", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80039
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|63||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250727), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|63, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 }, lastmod: Timestamp 5000|64, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|65, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 67 version: 5|65||4fd9788f6dfcc1afddb64962 based on: 5|63||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|63||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 177 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 177 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 177 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 177 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_177", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b8003d
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|65||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250752), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|65, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 }, lastmod: Timestamp 5000|66, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|67, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 68 version: 5|67||4fd9788f6dfcc1afddb64962 based on: 5|65||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|65||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 180 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 180 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 180 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 180 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_180", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80041
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|67||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250775), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|67, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 }, lastmod: Timestamp 5000|68, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|69, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 69 version: 5|69||4fd9788f6dfcc1afddb64962 based on: 5|67||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|67||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 183 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 183 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 183 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 183 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_183", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80045
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|69||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250798), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|69, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 }, lastmod: Timestamp 5000|70, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|71, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 70 version: 5|71||4fd9788f6dfcc1afddb64962 based on: 5|69||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|69||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 186 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 186 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 186 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 186 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_186", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80049
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|71||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250854), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|71, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 }, lastmod: Timestamp 5000|72, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|73, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 71 version: 5|73||4fd9788f6dfcc1afddb64962 based on: 5|71||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|71||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 189 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 189 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 189 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 189 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_189", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b8004d
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|73||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-42", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250877), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|73, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 }, lastmod: Timestamp 5000|74, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|75, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 72 version: 5|75||4fd9788f6dfcc1afddb64962 based on: 5|73||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|73||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 192 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 192 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 192 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 192 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_192", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80051
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|75||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-43", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250898), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|75, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 }, lastmod: Timestamp 5000|76, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|77, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 73 version: 5|77||4fd9788f6dfcc1afddb64962 based on: 5|75||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|75||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 195 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 195 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 195 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 195 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_195", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80055
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|77||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-44", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250923), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|77, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 }, lastmod: Timestamp 5000|78, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|79, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 198 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 198 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 198 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 198 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_198", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b80059
m30001| Thu Jun 14 01:37:30 [conn5] splitChunk accepted at version 5|79||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:30-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652250939), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|79, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 }, lastmod: Timestamp 5000|80, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|81, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 201 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 201 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 201 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 201 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_201", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789a3bf4d0a915b8005d
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 74 version: 5|79||4fd9788f6dfcc1afddb64962 based on: 5|77||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|77||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:30 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 75 version: 5|81||4fd9788f6dfcc1afddb64962 based on: 5|79||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:30 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|79||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|81||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251015), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|81, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 }, lastmod: Timestamp 5000|82, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|83, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 76 version: 5|83||4fd9788f6dfcc1afddb64962 based on: 5|81||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|81||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 204 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 204 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 204 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 204 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_204", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80061
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|83||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251033), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|83, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 }, lastmod: Timestamp 5000|84, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|85, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 77 version: 5|85||4fd9788f6dfcc1afddb64962 based on: 5|83||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|83||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 207 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 207 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 207 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 207 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_207", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80065
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|85||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-48", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251050), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|85, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 }, lastmod: Timestamp 5000|86, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|87, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 78 version: 5|87||4fd9788f6dfcc1afddb64962 based on: 5|85||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|85||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 210 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 210 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 210 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 210 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_210", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80069
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|87||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-49", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251068), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|87, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 }, lastmod: Timestamp 5000|88, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|89, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 79 version: 5|89||4fd9788f6dfcc1afddb64962 based on: 5|87||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|87||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 } (splitThreshold 943718) (migrate suggested)
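[annotation] The repeated "created new distributed lock ... acquired ... unlocked" lines above are the shard taking the per-collection distributed lock on the config server for each split. A minimal sketch of inspecting those lock documents while the test cluster is running, assuming pymongo is available and the mongos from this log is reachable at localhost:30999; config.locks and its state/who/why fields are the standard 2.x distributed-lock metadata these lines refer to.

    # Sketch only: assumes the test cluster above is still up.
    from pymongo import MongoClient

    client = MongoClient("localhost", 30999)   # the m30999 mongos in this log
    for lock in client.config.locks.find():
        # legacy encoding: state 0 = unlocked, 2 = currently held
        print(lock["_id"], lock.get("state"), lock.get("who"), lock.get("why"))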
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 213 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 213 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 213 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 213 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_213", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b8006d
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|89||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251119), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|89, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, lastmod: Timestamp 5000|90, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|91, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 80 version: 5|91||4fd9788f6dfcc1afddb64962 based on: 5|89||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|89||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 216 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 216 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 216 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 216 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_216", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80071
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|91||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251272), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|91, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, lastmod: Timestamp 5000|92, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|93, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:31 [conn5] command admin.$cmd command: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_216", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:226086 w:961 reslen:119 137ms
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 81 version: 5|93||4fd9788f6dfcc1afddb64962 based on: 5|91||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|91||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 219 } -->> { : MaxKey, : MaxKey }
m29000| Thu Jun 14 01:37:31 [conn11] command config.$cmd command: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_216", lastmod: Timestamp 5000|92, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, shard: "shard0001" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_216" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_219", lastmod: Timestamp 5000|93, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, max: { files_id: MaxKey, n: MaxKey }, shard: "shard0001" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_219" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "sharded_files_id_n.fs.chunks" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 5000|91 } } ] } ntoreturn:1 keyUpdates:0 locks(micros) W:300341 r:8201 w:18089 reslen:72 135ms
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 219 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 219 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 219 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_219", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80075
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|93||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251288), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|93, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 }, lastmod: Timestamp 5000|94, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|95, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 82 version: 5|95||4fd9788f6dfcc1afddb64962 based on: 5|93||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|93||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 222 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 222 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 222 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 222 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_222", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80079
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|95||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-53", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251305), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|95, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 }, lastmod: Timestamp 5000|96, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|97, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 83 version: 5|97||4fd9788f6dfcc1afddb64962 based on: 5|95||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|95||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 225 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 225 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 225 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 225 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_225", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b8007d
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|97||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-54", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251321), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|97, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 }, lastmod: Timestamp 5000|98, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|99, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 84 version: 5|99||4fd9788f6dfcc1afddb64962 based on: 5|97||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|97||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 228 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 228 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 228 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 228 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_228", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80081
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|99||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-55", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251338), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|99, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 }, lastmod: Timestamp 5000|100, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|101, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 85 version: 5|101||4fd9788f6dfcc1afddb64962 based on: 5|99||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|99||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 231 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 231 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 231 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 231 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_231", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80085
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|101||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-56", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251355), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|101, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 }, lastmod: Timestamp 5000|102, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|103, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 86 version: 5|103||4fd9788f6dfcc1afddb64962 based on: 5|101||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|101||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 } (splitThreshold 943718) (migrate suggested)
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 234 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.2, size: 64MB, took 1.827 secs
m30001| Thu Jun 14 01:37:31 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) W:51 r:93449 w:8677922 521ms
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 234 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 234 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 234 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:37:31 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_234", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b80089
m30001| Thu Jun 14 01:37:31 [conn5] splitChunk accepted at version 5|103||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-57", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251892), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|103, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, lastmod: Timestamp 5000|104, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 5000|105, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30999| Thu Jun 14 01:37:31 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 87 version: 5|105||4fd9788f6dfcc1afddb64962 based on: 5|103||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:31 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|103||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:31 [conn] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|105||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 } max: { files_id: MaxKey, n: MaxKey } to: shard0000:localhost:30000
m30999| Thu Jun 14 01:37:31 [conn] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 5|105||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 } max: { files_id: MaxKey, n: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:37:31 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_237", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:31 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:37:31 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' acquired, ts : 4fd9789b3bf4d0a915b8008a
m30001| Thu Jun 14 01:37:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:31-58", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652251897), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:37:31 [conn5] moveChunk request accepted at version 5|105||4fd9788f6dfcc1afddb64962
m30001| Thu Jun 14 01:37:31 [conn5] moveChunk number of documents: 1
m30001| Thu Jun 14 01:37:31 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.3, filling with zeroes...
m30000| Thu Jun 14 01:37:31 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 } -> { files_id: MaxKey, n: MaxKey }
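[annotation] The sequence above is mongos auto-splitting the tail chunk and then starting a migration of { files_id: ..., n: 237 } -> MaxKey from shard0001 to shard0000. For reference, roughly the manual equivalents are the split and moveChunk admin commands issued against the mongos; this is a hedged sketch assuming the same mongos (localhost:30999), namespace, and shard names seen in the log, with SON used to preserve command field order.

    # Sketch only: manual counterparts of the autosplit + migration logged above.
    from bson.objectid import ObjectId
    from bson.son import SON
    from pymongo import MongoClient

    client = MongoClient("localhost", 30999)
    fid = ObjectId("4fd97892353ee533a5bdfb63")

    # Split the tail chunk at { files_id: fid, n: 237 } (what splitChunk did above).
    client.admin.command(SON([
        ("split", "sharded_files_id_n.fs.chunks"),
        ("middle", SON([("files_id", fid), ("n", 237)])),
    ]))

    # Move the chunk containing that key to shard0000 (what moveChunk begins above).
    client.admin.command(SON([
        ("moveChunk", "sharded_files_id_n.fs.chunks"),
        ("find", SON([("files_id", fid), ("n", 237)])),
        ("to", "shard0000"),
    ]))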
m30001| Thu Jun 14 01:37:32 [initandlisten] connection accepted from 127.0.0.1:44434 #8 (8 connections now open)
m30999| Thu Jun 14 01:37:32 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' acquired, ts : 4fd9789c6dfcc1afddb64964
m30999| Thu Jun 14 01:37:32 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:37:32 [Balancer] shard0000 maxSize: 0 currSize: 320 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:32 [Balancer] shard0001 maxSize: 0 currSize: 384 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:32 [Balancer] shard0002 maxSize: 0 currSize: 320 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:32 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:37:32 [Balancer] shard0000
m30999| Thu Jun 14 01:37:32 [Balancer] shard0001
m30999| Thu Jun 14 01:37:32 [Balancer] shard0002
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id.fs.chunks-files_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd978806dfcc1afddb64960'), ns: "sharded_files_id.fs.chunks", min: { files_id: MinKey }, max: { files_id: MaxKey }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] ----
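[annotation] The Balancer dump above (ShardInfoMap / ShardToChunksMap) is printed from the config metadata. A small sketch of reproducing that per-shard chunk view by reading config.chunks through the same mongos, assuming it is still reachable at localhost:30999 as in this log.

    # Sketch only: count chunks per shard for the namespace being balanced above.
    from collections import Counter
    from pymongo import MongoClient

    client = MongoClient("localhost", 30999)
    counts = Counter(
        doc["shard"]
        for doc in client.config.chunks.find({"ns": "sharded_files_id_n.fs.chunks"})
    )
    for shard, n_chunks in sorted(counts.items()):
        print(shard, n_chunks)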
m30999| Thu Jun 14 01:37:32 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:37:32 [Balancer] shard0000 maxSize: 0 currSize: 320 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:32 [Balancer] shard0001 maxSize: 0 currSize: 384 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:32 [Balancer] shard0002 maxSize: 0 currSize: 320 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:37:32 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:37:32 [Balancer] shard0000
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_MinKeyn_MinKey", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: MinKey, n: MinKey }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_45", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_48", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 48 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_51", lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 51 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_54", lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 54 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_57", lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 57 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_60", lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 60 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_63", lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 63 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_66", lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 66 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_69", lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 69 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_72", lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 72 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_75", lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 75 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_78", lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 78 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, shard: "shard0000" }
m30999| Thu Jun 14 01:37:32 [Balancer] shard0001
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_3", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_81", lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 81 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_84", lastmod: Timestamp 5000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 84 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_87", lastmod: Timestamp 5000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 87 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_90", lastmod: Timestamp 5000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 90 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_93", lastmod: Timestamp 5000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 93 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_96", lastmod: Timestamp 5000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 96 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_99", lastmod: Timestamp 5000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 99 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_102", lastmod: Timestamp 5000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 102 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_105", lastmod: Timestamp 5000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 105 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_108", lastmod: Timestamp 5000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 108 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_111", lastmod: Timestamp 5000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 111 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_114", lastmod: Timestamp 5000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 114 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_117", lastmod: Timestamp 5000|26, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 117 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_120", lastmod: Timestamp 5000|28, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 120 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_123", lastmod: Timestamp 5000|30, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 123 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_126", lastmod: Timestamp 5000|32, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 126 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_129", lastmod: Timestamp 5000|34, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 129 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_132", lastmod: Timestamp 5000|36, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 132 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_135", lastmod: Timestamp 5000|38, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 135 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_138", lastmod: Timestamp 5000|40, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 138 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_141", lastmod: Timestamp 5000|42, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 141 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_144", lastmod: Timestamp 5000|44, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 144 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_147", lastmod: Timestamp 5000|46, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 147 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_150", lastmod: Timestamp 5000|48, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 150 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_153", lastmod: Timestamp 5000|50, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 153 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_156", lastmod: Timestamp 5000|52, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 156 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_159", lastmod: Timestamp 5000|54, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 159 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_162", lastmod: Timestamp 5000|56, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 162 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_165", lastmod: Timestamp 5000|58, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 165 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_168", lastmod: Timestamp 5000|60, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 168 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_171", lastmod: Timestamp 5000|62, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 171 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_174", lastmod: Timestamp 5000|64, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 174 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_177", lastmod: Timestamp 5000|66, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 177 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_180", lastmod: Timestamp 5000|68, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 180 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_183", lastmod: Timestamp 5000|70, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 183 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_186", lastmod: Timestamp 5000|72, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 186 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_189", lastmod: Timestamp 5000|74, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 189 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_192", lastmod: Timestamp 5000|76, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 192 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_195", lastmod: Timestamp 5000|78, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 195 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_198", lastmod: Timestamp 5000|80, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 198 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_201", lastmod: Timestamp 5000|82, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 201 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_204", lastmod: Timestamp 5000|84, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 204 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_207", lastmod: Timestamp 5000|86, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 207 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_210", lastmod: Timestamp 5000|88, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 210 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_213", lastmod: Timestamp 5000|90, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 213 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_216", lastmod: Timestamp 5000|92, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 216 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_219", lastmod: Timestamp 5000|94, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 219 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_222", lastmod: Timestamp 5000|96, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 222 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_225", lastmod: Timestamp 5000|98, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 225 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_228", lastmod: Timestamp 5000|100, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 228 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_231", lastmod: Timestamp 5000|102, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 231 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_234", lastmod: Timestamp 5000|104, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 234 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_237", lastmod: Timestamp 5000|105, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] shard0002
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_6", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_9", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 9 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_12", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 12 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_15", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 15 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_18", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 18 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_21", lastmod: Timestamp 3000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 21 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_24", lastmod: Timestamp 3000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 24 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_27", lastmod: Timestamp 3000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 27 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_30", lastmod: Timestamp 3000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 30 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_33", lastmod: Timestamp 3000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 33 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_36", lastmod: Timestamp 3000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 36 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_39", lastmod: Timestamp 3000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 39 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_42", lastmod: Timestamp 3000|26, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 42 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 45 }, shard: "shard0002" }
m30999| Thu Jun 14 01:37:32 [Balancer] ----
m30999| Thu Jun 14 01:37:32 [Balancer] chose [shard0001] to [shard0002] { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_3", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, shard: "shard0001" }
m30999| Thu Jun 14 01:37:32 [Balancer] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 3|1||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30999| Thu Jun 14 01:37:32 [Balancer] moveChunk result: { errmsg: "migration already in progress", ok: 0.0 }
m30999| Thu Jun 14 01:37:32 [Balancer] balancer move failed: { errmsg: "migration already in progress", ok: 0.0 } from: shard0001 to: shard0002 chunk: min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }
m30999| Thu Jun 14 01:37:32 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652200:1804289383' unlocked.
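The balancer round above dumps the chunk map for sharded_files_id_n.fs.chunks straight out of config.chunks, grouped by owning shard, before it picks a single chunk to move (and here the move is rejected because a migration is already running). A minimal mongo-shell sketch of the same lookup, assuming a shell connected to the mongos of this test (the m30999 process on localhost:30999):

    // List the chunks of the sharded GridFS collection grouped by shard,
    // mirroring the per-shard listing the balancer prints above.
    db.getSiblingDB("config").chunks
      .find({ ns: "sharded_files_id_n.fs.chunks" })
      .sort({ shard: 1, min: 1 })
      .forEach(printjson)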
m30001| Thu Jun 14 01:37:32 [conn8] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_3", configdb: "localhost:29000" }
m30001| Thu Jun 14 01:37:32 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:32-59", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44434", time: new Date(1339652252205), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 }, step1 of 6: 0, note: "aborted" } }
m30001| Thu Jun 14 01:37:32 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:37:32 [conn5] moveChunk setting version to: 6|0||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:32 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 } -> { files_id: MaxKey, n: MaxKey }
m30000| Thu Jun 14 01:37:32 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:32-21", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652252907), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 2, step4 of 5: 0, step5 of 5: 1006 } }
m30001| Thu Jun 14 01:37:32 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:37:32 [conn5] moveChunk updating self version to: 6|1||4fd9788f6dfcc1afddb64962 through { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 3 } -> { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 6 } for collection 'sharded_files_id_n.fs.chunks'
m30001| Thu Jun 14 01:37:32 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:32-60", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652252912), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:37:32 [conn5] doing delete inline
m30001| Thu Jun 14 01:37:32 [conn5] moveChunk deleted: 1
m30001| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30001:1339652243:1717811668' unlocked.
m30001| Thu Jun 14 01:37:32 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:32-61", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44416", time: new Date(1339652252913), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:37:32 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_237", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:228366 w:1898 reslen:37 1017ms
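The 1017 ms command just logged is the shard-side moveChunk that finished migrating the top chunk ({ n: 237 } -> MaxKey) from shard0001 to shard0000. From a client, the same migration would normally be requested through mongos rather than against the shard directly; a hedged sketch of that mongos-side request, with the namespace, key and target shard copied from the logged command (this is an illustration, not the exact call the test harness makes):

    // Ask mongos (localhost:30999 in this test) to move the chunk that
    // contains the given shard-key value to shard0000.
    db.adminCommand({
        moveChunk: "sharded_files_id_n.fs.chunks",
        find: { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 237 },
        to: "shard0000"
    })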
m30999| Thu Jun 14 01:37:32 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 88 version: 6|1||4fd9788f6dfcc1afddb64962 based on: 5|105||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 237 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 237 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 237 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 237 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_237", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:32 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789c349b5f269525c26e
m30000| Thu Jun 14 01:37:32 [conn5] splitChunk accepted at version 6|0||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:32 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:32-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652252931), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 }, lastmod: Timestamp 6000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:32 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 89 version: 6|3||4fd9788f6dfcc1afddb64962 based on: 6|1||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:32 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|0||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 237 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 } (splitThreshold 943718) (migrate suggested)
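Each autosplit cycle above follows the same shape: mongos asks the shard for split points, the shard reports that the split threshold (maxChunkSizeBytes: 1048576, splitThreshold 943718) has been reached after two points, and mongos then issues a splitChunk command under the collection's distributed lock. A sketch of the shard-side command from the cycle just logged, with every field copied from the request above (in ordinary use a split is requested via the "split" command on mongos, which builds this document itself):

    // The splitChunk admin command received by the shard at localhost:30000.
    db.adminCommand({
        splitChunk: "sharded_files_id_n.fs.chunks",
        keyPattern: { files_id: 1, n: 1 },
        min: { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 237 },
        max: { files_id: MaxKey, n: MaxKey },
        from: "shard0000",
        splitKeys: [ { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 240 } ],
        shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_237",
        configdb: "localhost:29000"
    })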
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 240 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 240 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 240 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 240 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_240", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:32 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789c349b5f269525c272
m30000| Thu Jun 14 01:37:32 [conn5] splitChunk accepted at version 6|3||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:32 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:32-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652252949), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 }, lastmod: Timestamp 6000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|5, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:32 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 90 version: 6|5||4fd9788f6dfcc1afddb64962 based on: 6|3||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:32 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|3||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 240 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 243 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 243 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 243 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 243 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_243", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:32 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789c349b5f269525c276
m30000| Thu Jun 14 01:37:32 [conn5] splitChunk accepted at version 6|5||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:32 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:32-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652252969), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 }, lastmod: Timestamp 6000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|7, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:32 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 91 version: 6|7||4fd9788f6dfcc1afddb64962 based on: 6|5||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:32 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|5||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 243 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 246 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 246 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 246 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 246 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:32 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_246", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:32 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789c349b5f269525c27a
m30000| Thu Jun 14 01:37:32 [conn5] splitChunk accepted at version 6|7||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253147), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 }, lastmod: Timestamp 6000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|9, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m29000| Thu Jun 14 01:37:33 [conn9] command config.$cmd command: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_246", lastmod: Timestamp 6000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 }, shard: "shard0000" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_246" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_249", lastmod: Timestamp 6000|9, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 }, max: { files_id: MaxKey, n: MaxKey }, shard: "shard0000" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_249" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "sharded_files_id_n.fs.chunks" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 6000|7 } } ] } ntoreturn:1 keyUpdates:0 locks(micros) W:167832 r:3525 w:7550 reslen:72 160ms
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:33 [conn5] command admin.$cmd command: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_246", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:158362 r:189721 w:2739541 reslen:119 163ms
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 92 version: 6|9||4fd9788f6dfcc1afddb64962 based on: 6|7||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|7||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 246 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 249 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 249 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 249 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 249 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_249", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c27e
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|9||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253165), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 }, lastmod: Timestamp 6000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|11, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 93 version: 6|11||4fd9788f6dfcc1afddb64962 based on: 6|9||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|9||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 249 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 252 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 252 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 252 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 252 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_252", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c282
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|11||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253183), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 }, lastmod: Timestamp 6000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|13, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 94 version: 6|13||4fd9788f6dfcc1afddb64962 based on: 6|11||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|11||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 252 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 255 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 255 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 255 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 255 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_255", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c286
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|13||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253201), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 }, lastmod: Timestamp 6000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|15, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 95 version: 6|15||4fd9788f6dfcc1afddb64962 based on: 6|13||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|13||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 255 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 258 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 258 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 258 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 258 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_258", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c28a
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|15||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253219), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 }, lastmod: Timestamp 6000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|17, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 96 version: 6|17||4fd9788f6dfcc1afddb64962 based on: 6|15||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|15||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 258 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 261 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 261 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 261 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 261 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_261", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c28e
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|17||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253237), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 }, lastmod: Timestamp 6000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|19, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 4ms sequenceNumber: 97 version: 6|19||4fd9788f6dfcc1afddb64962 based on: 6|17||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|17||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 261 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 264 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 264 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 264 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 264 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_264", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c292
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|19||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253258), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 }, lastmod: Timestamp 6000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|21, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 267 } -->> { : MaxKey, : MaxKey }
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 98 version: 6|21||4fd9788f6dfcc1afddb64962 based on: 6|19||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|19||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 264 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 267 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 267 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 267 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_267", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c296
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|21||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253278), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 }, lastmod: Timestamp 6000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|23, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 99 version: 6|23||4fd9788f6dfcc1afddb64962 based on: 6|21||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|21||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 267 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 270 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 270 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 270 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 270 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_270", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c29a
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|23||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253295), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 }, lastmod: Timestamp 6000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|25, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 100 version: 6|25||4fd9788f6dfcc1afddb64962 based on: 6|23||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|23||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 270 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 273 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 273 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 273 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 273 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_273", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c29e
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|25||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253315), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 }, lastmod: Timestamp 6000|26, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|27, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 101 version: 6|27||4fd9788f6dfcc1afddb64962 based on: 6|25||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|25||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 273 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 276 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 276 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 276 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 276 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_276", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2a2
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|27||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253363), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|27, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 }, lastmod: Timestamp 6000|28, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|29, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 102 version: 6|29||4fd9788f6dfcc1afddb64962 based on: 6|27||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|27||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 276 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 279 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 279 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 279 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 279 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_279", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2a6
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|29||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253383), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|29, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 }, lastmod: Timestamp 6000|30, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|31, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 103 version: 6|31||4fd9788f6dfcc1afddb64962 based on: 6|29||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|29||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 279 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 282 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 282 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 282 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 282 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_282", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2aa
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|31||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253405), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|31, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 }, lastmod: Timestamp 6000|32, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|33, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 104 version: 6|33||4fd9788f6dfcc1afddb64962 based on: 6|31||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|31||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 282 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 285 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 285 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 285 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 285 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_285", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2ae
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|33||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253423), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|33, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 }, lastmod: Timestamp 6000|34, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|35, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 105 version: 6|35||4fd9788f6dfcc1afddb64962 based on: 6|33||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|33||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 285 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 288 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 288 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 288 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 288 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_288", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2b2
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|35||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253442), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|35, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 }, lastmod: Timestamp 6000|36, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|37, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 106 version: 6|37||4fd9788f6dfcc1afddb64962 based on: 6|35||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|35||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 288 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 291 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 291 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 291 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 291 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_291", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2b6
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|37||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253494), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|37, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 }, lastmod: Timestamp 6000|38, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|39, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 107 version: 6|39||4fd9788f6dfcc1afddb64962 based on: 6|37||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|37||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 291 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 294 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 294 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 294 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 294 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_294", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2ba
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|39||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253512), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|39, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 }, lastmod: Timestamp 6000|40, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|41, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 30ms sequenceNumber: 108 version: 6|41||4fd9788f6dfcc1afddb64962 based on: 6|39||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|39||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 294 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 297 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 297 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 297 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 297 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_297", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2be
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|41||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-42", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253559), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|41, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 }, lastmod: Timestamp 6000|42, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|43, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 109 version: 6|43||4fd9788f6dfcc1afddb64962 based on: 6|41||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|41||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 297 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 300 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 300 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 300 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 300 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_300", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2c2
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|43||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-43", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253577), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|43, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 }, lastmod: Timestamp 6000|44, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|45, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 110 version: 6|45||4fd9788f6dfcc1afddb64962 based on: 6|43||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|43||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 300 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 303 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 303 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 303 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 303 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_303", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2c6
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|45||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-44", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253596), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|45, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 }, lastmod: Timestamp 6000|46, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|47, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 111 version: 6|47||4fd9788f6dfcc1afddb64962 based on: 6|45||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|45||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 303 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 306 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 306 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 306 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 306 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_306", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2ca
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|47||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253614), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|47, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 }, lastmod: Timestamp 6000|48, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|49, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 112 version: 6|49||4fd9788f6dfcc1afddb64962 based on: 6|47||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|47||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 306 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 309 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 309 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 309 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 309 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_309", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2ce
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|49||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253632), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|49, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 }, lastmod: Timestamp 6000|50, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|51, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 113 version: 6|51||4fd9788f6dfcc1afddb64962 based on: 6|49||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|49||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 309 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 312 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 312 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 312 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 312 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_312", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789d349b5f269525c2d2
m30000| Thu Jun 14 01:37:33 [conn5] splitChunk accepted at version 6|51||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:33 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:33-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652253680), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|51, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 }, lastmod: Timestamp 6000|52, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|53, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:33 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 114 version: 6|53||4fd9788f6dfcc1afddb64962 based on: 6|51||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:33 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|51||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 312 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 315 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) W:20 r:290747 w:6210719 309ms
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 315 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 315 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 315 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_315", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2d6
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|53||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-48", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254008), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|53, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 }, lastmod: Timestamp 6000|54, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|55, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 115 version: 6|55||4fd9788f6dfcc1afddb64962 based on: 6|53||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|53||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 315 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 318 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 318 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 318 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 318 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_318", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2da
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|55||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-49", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254033), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|55, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 }, lastmod: Timestamp 6000|56, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|57, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 116 version: 6|57||4fd9788f6dfcc1afddb64962 based on: 6|55||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|55||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 318 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 321 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 321 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 321 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 321 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_321", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2de
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|57||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254053), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|57, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 }, lastmod: Timestamp 6000|58, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|59, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 117 version: 6|59||4fd9788f6dfcc1afddb64962 based on: 6|57||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|57||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 321 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 324 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 324 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 324 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 324 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_324", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2e2
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|59||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254078), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|59, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 }, lastmod: Timestamp 6000|60, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|61, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 327 } -->> { : MaxKey, : MaxKey }
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 118 version: 6|61||4fd9788f6dfcc1afddb64962 based on: 6|59||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|59||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 324 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 } (splitThreshold 943718) (migrate suggested)
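Each autosplit cycle in this log follows the same shape: mongos crosses the splitThreshold (943718 bytes here, roughly 90% of the 1 MB maxChunkSizeBytes used in this test), asks the owning shard for split points, the shard takes the collection's distributed lock on the config server (localhost:29000), commits the split, writes a "split" change-log event, and mongos reloads the chunk map (the ChunkManager sequenceNumber/version lines). The same split can also be requested by hand through mongos; a minimal sketch, assuming a mongo shell connected to the mongos on port 30999 and reusing the files_id value from the log:

  // sh.splitAt() drives the same splitChunk path logged above, at an explicit key
  sh.splitAt("sharded_files_id_n.fs.chunks",
             { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 324 });

  // equivalent low-level form: the split admin command with an explicit middle key
  db.adminCommand({ split: "sharded_files_id_n.fs.chunks",
                    middle: { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 324 } });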
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 327 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 327 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 327 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_327", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2e6
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|61||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254099), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|61, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 }, lastmod: Timestamp 6000|62, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|63, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 119 version: 6|63||4fd9788f6dfcc1afddb64962 based on: 6|61||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|61||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 327 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 330 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 330 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 330 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 330 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_330", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2ea
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|63||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-53", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254118), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|63, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 }, lastmod: Timestamp 6000|64, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|65, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 120 version: 6|65||4fd9788f6dfcc1afddb64962 based on: 6|63||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|63||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 330 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 333 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 333 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) W:20 r:294138 w:6508325 290ms
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 333 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 333 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_333", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2ee
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|65||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-54", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254430), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|65, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 }, lastmod: Timestamp 6000|66, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|67, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 121 version: 6|67||4fd9788f6dfcc1afddb64962 based on: 6|65||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|65||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 333 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 336 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 336 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 336 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 336 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_336", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2f2
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|67||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-55", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254448), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|67, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 }, lastmod: Timestamp 6000|68, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|69, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 122 version: 6|69||4fd9788f6dfcc1afddb64962 based on: 6|67||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|67||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 336 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 339 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 339 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 339 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 339 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_339", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2f6
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|69||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-56", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254467), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|69, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 }, lastmod: Timestamp 6000|70, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|71, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 123 version: 6|71||4fd9788f6dfcc1afddb64962 based on: 6|69||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|69||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 339 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 342 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 342 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 342 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 342 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_342", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2fa
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|71||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-57", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254520), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|71, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 }, lastmod: Timestamp 6000|72, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|73, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 3ms sequenceNumber: 124 version: 6|73||4fd9788f6dfcc1afddb64962 based on: 6|71||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|71||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 342 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 345 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 345 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 345 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 345 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_345", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c2fe
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|73||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-58", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254543), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|73, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 }, lastmod: Timestamp 6000|74, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|75, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 125 version: 6|75||4fd9788f6dfcc1afddb64962 based on: 6|73||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|73||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 345 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 348 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 348 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 348 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 348 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_348", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c302
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|75||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-59", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254566), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|75, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 }, lastmod: Timestamp 6000|76, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|77, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 126 version: 6|77||4fd9788f6dfcc1afddb64962 based on: 6|75||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|75||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 348 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 351 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 351 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 351 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 351 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_351", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c306
m30000| Thu Jun 14 01:37:34 [conn5] splitChunk accepted at version 6|77||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:34 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:34-60", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652254587), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|77, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 }, lastmod: Timestamp 6000|78, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|79, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:34 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 127 version: 6|79||4fd9788f6dfcc1afddb64962 based on: 6|77||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:34 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|77||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 351 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 } (splitThreshold 943718) (migrate suggested)
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 354 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.3, filling with zeroes...
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 354 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) W:20 r:298343 w:6895795 379ms
m30000| Thu Jun 14 01:37:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 354 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 354 } -->> { : MaxKey, : MaxKey }
m30000| Thu Jun 14 01:37:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_354", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:35 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789e349b5f269525c30a
m30000| Thu Jun 14 01:37:35 [conn5] splitChunk accepted at version 6|79||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:35 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:35-61", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652255002), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|79, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, lastmod: Timestamp 6000|80, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 6000|81, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30000| Thu Jun 14 01:37:35 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30999| Thu Jun 14 01:37:35 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 128 version: 6|81||4fd9788f6dfcc1afddb64962 based on: 6|79||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:35 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|79||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 354 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:37:35 [conn] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|81||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 } max: { files_id: MaxKey, n: MaxKey } to: shard0002:localhost:30002
m30999| Thu Jun 14 01:37:35 [conn] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 6|81||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 } max: { files_id: MaxKey, n: MaxKey }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Thu Jun 14 01:37:35 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_357", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:37:35 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:37:35 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' acquired, ts : 4fd9789f349b5f269525c30b
m30000| Thu Jun 14 01:37:35 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:35-62", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652255009), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0002" } }
m30000| Thu Jun 14 01:37:35 [conn5] moveChunk request accepted at version 6|81||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:35 [conn5] moveChunk number of documents: 1
m30002| Thu Jun 14 01:37:35 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 } -> { files_id: MaxKey, n: MaxKey }
m30001| Thu Jun 14 01:37:35 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.3, size: 128MB, took 3.484 secs
m30002| Thu Jun 14 01:37:36 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 } -> { files_id: MaxKey, n: MaxKey }
m30002| Thu Jun 14 01:37:36 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-17", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652256019), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 3, step4 of 5: 0, step5 of 5: 1006 } }
m30000| Thu Jun 14 01:37:36 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:37:36 [conn5] moveChunk setting version to: 7|0||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:36 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:37:36 [conn5] moveChunk updating self version to: 7|1||4fd9788f6dfcc1afddb64962 through { files_id: MinKey, n: MinKey } -> { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 0 } for collection 'sharded_files_id_n.fs.chunks'
m30000| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-63", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652256024), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0002" } }
m30000| Thu Jun 14 01:37:36 [conn5] doing delete inline
m30000| Thu Jun 14 01:37:36 [conn5] moveChunk deleted: 1
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 129 version: 7|1||4fd9788f6dfcc1afddb64962 based on: 6|81||4fd9788f6dfcc1afddb64962
m30000| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30000:1339652242:952203482' unlocked.
m30000| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-64", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56512", time: new Date(1339652256043), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1009, step5 of 6: 5, step6 of 6: 1 } }
m30000| Thu Jun 14 01:37:36 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_357", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:158362 r:204599 w:2741037 reslen:37 1035ms
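The migration above is the standard donor-driven handshake: mongos picks shard0002 as the recipient, the donor (shard0000) takes the collection's distributed lock, the recipient clones the single document in the chunk and reports a "steady" state, the donor bumps the collection version to 7|0, commits the ownership change on the config server, deletes its local copy inline, and logs a moveChunk.from event with per-step timings. The same migration can be requested manually through mongos; a minimal sketch, assuming a mongo shell connected to the mongos and reusing the chunk bounds from the log:

  // sh.moveChunk(namespace, query locating the chunk, destination shard)
  sh.moveChunk("sharded_files_id_n.fs.chunks",
               { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 357 },
               "shard0002");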
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 357 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 357 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 357 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 357 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_357", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dcf2
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|0||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256060), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 }, lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 130 version: 7|3||4fd9788f6dfcc1afddb64962 based on: 7|1||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|0||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 357 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 360 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 360 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 360 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 360 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_360", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dcf6
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|3||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256086), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 }, lastmod: Timestamp 7000|4, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|5, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 131 version: 7|5||4fd9788f6dfcc1afddb64962 based on: 7|3||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|3||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 360 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 363 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 363 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 363 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 363 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_363", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dcfa
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|5||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256110), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 }, lastmod: Timestamp 7000|6, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|7, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 132 version: 7|7||4fd9788f6dfcc1afddb64962 based on: 7|5||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|5||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 363 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 366 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 366 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 366 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 366 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_366", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dcfe
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|7||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256135), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 }, lastmod: Timestamp 7000|8, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|9, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 133 version: 7|9||4fd9788f6dfcc1afddb64962 based on: 7|7||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|7||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 366 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 369 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 369 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 369 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 369 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_369", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd02
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|9||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256154), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 }, lastmod: Timestamp 7000|10, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|11, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 372 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 372 } -->> { : MaxKey, : MaxKey }
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 134 version: 7|11||4fd9788f6dfcc1afddb64962 based on: 7|9||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|9||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 369 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 372 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 372 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_372", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd06
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|11||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256200), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 }, lastmod: Timestamp 7000|12, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|13, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 135 version: 7|13||4fd9788f6dfcc1afddb64962 based on: 7|11||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|11||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 372 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 375 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 375 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 375 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 375 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_375", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd0a
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|13||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256219), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 }, lastmod: Timestamp 7000|14, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|15, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 136 version: 7|15||4fd9788f6dfcc1afddb64962 based on: 7|13||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|13||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 375 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 378 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 378 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 378 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 378 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_378", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd0e
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|15||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256239), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 }, lastmod: Timestamp 7000|16, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|17, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 137 version: 7|17||4fd9788f6dfcc1afddb64962 based on: 7|15||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|15||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 378 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 381 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 381 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 381 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 381 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_381", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd12
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|17||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256259), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 }, lastmod: Timestamp 7000|18, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|19, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 138 version: 7|19||4fd9788f6dfcc1afddb64962 based on: 7|17||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|17||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 381 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 384 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 384 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 384 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 384 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_384", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd16
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|19||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256279), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 }, lastmod: Timestamp 7000|20, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|21, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 139 version: 7|21||4fd9788f6dfcc1afddb64962 based on: 7|19||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|19||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 384 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 387 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 387 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 387 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 387 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_387", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd1a
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|21||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256297), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 }, lastmod: Timestamp 7000|22, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|23, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 140 version: 7|23||4fd9788f6dfcc1afddb64962 based on: 7|21||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|21||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 387 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 390 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 390 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 390 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 390 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_390", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd1e
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|23||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256315), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 }, lastmod: Timestamp 7000|24, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|25, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 141 version: 7|25||4fd9788f6dfcc1afddb64962 based on: 7|23||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|23||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 390 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 393 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 393 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 393 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 393 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_393", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd22
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|25||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256336), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 }, lastmod: Timestamp 7000|26, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|27, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 142 version: 7|27||4fd9788f6dfcc1afddb64962 based on: 7|25||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|25||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 393 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 396 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 396 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 396 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 396 } -->> { : MaxKey, : MaxKey }
m30002| Thu Jun 14 01:37:36 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 399 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('4fd97892353ee533a5bdfb63')n_396", configdb: "localhost:29000" }
m30002| Thu Jun 14 01:37:36 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' acquired, ts : 4fd978a0af0847faaef2dd26
m30002| Thu Jun 14 01:37:36 [conn5] splitChunk accepted at version 7|27||4fd9788f6dfcc1afddb64962
m30002| Thu Jun 14 01:37:36 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:37:36-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:45510", time: new Date(1339652256356), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|27, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 }, max: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 399 }, lastmod: Timestamp 7000|28, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') }, right: { min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 399 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 7000|29, lastmodEpoch: ObjectId('4fd9788f6dfcc1afddb64962') } } }
m30002| Thu Jun 14 01:37:36 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/domU-12-31-39-01-70-B4:30002:1339652246:784930048' unlocked.
m30999| Thu Jun 14 01:37:36 [conn] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1ms sequenceNumber: 143 version: 7|29||4fd9788f6dfcc1afddb64962 based on: 7|27||4fd9788f6dfcc1afddb64962
m30999| Thu Jun 14 01:37:36 [conn] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 7|27||000000000000000000000000 min: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 396 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('4fd97892353ee533a5bdfb63'), n: 399 } (splitThreshold 943718) (migrate suggested)
m30002| Thu Jun 14 01:37:36 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('4fd97892353ee533a5bdfb63'), : 399 } -->> { : MaxKey, : MaxKey }
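The repeating pattern above is the mongos autosplitter at work: mongos asks shard0002 for split points on the top chunk, the shard reports the chunk has passed the split threshold (943718 bytes), receives a splitChunk request, takes the collection's distributed lock through the config server at localhost:29000, records a "split" event in the changelog, and mongos then reloads the ChunkManager at the bumped version. For comparison, the same split can be requested by hand through mongos; a minimal sketch, assuming a shell connected to the mongos and using a hypothetical split point of n: 402 (the test itself relies purely on autosplit):

    // Hypothetical manual split of the top chunk, mirroring what the
    // autosplitter does automatically in the lines above. Run against mongos.
    db.adminCommand({
        split: "sharded_files_id_n.fs.chunks",
        middle: { files_id: ObjectId("4fd97892353ee533a5bdfb63"), n: 402 }
    });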
m30001| Thu Jun 14 01:37:36 [conn4] command sharded_files_id_n.$cmd command: { filemd5: ObjectId('4fd97892353ee533a5bdfb63'), root: "fs", partialOk: true, startAt: 81, md5state: BinData } ntoreturn:1 keyUpdates:0 numYields: 156 locks(micros) W:51 r:115819 w:8678247 reslen:197 145ms
m30000| Thu Jun 14 01:37:36 [conn4] command sharded_files_id_n.$cmd command: { filemd5: ObjectId('4fd97892353ee533a5bdfb63'), root: "fs", partialOk: true, startAt: 237, md5state: BinData } ntoreturn:1 keyUpdates:0 numYields: 120 locks(micros) W:20 r:348243 w:6895795 reslen:197 133ms
sh25747| added file: { _id: ObjectId('4fd97892353ee533a5bdfb63'), filename: "mongod", chunkSize: 262144, uploadDate: new Date(1339652256816), md5: "cd2eb30417f1f1fb1c666ccb462da035", length: 105292849 }
m30999| Thu Jun 14 01:37:36 [conn] end connection 127.0.0.1:53400 (1 connection now open)
sh25747| done!
fileObj: {
"_id" : ObjectId("4fd97892353ee533a5bdfb63"),
"filename" : "mongod",
"chunkSize" : 262144,
"uploadDate" : ISODate("2012-06-14T05:37:36.816Z"),
"md5" : "cd2eb30417f1f1fb1c666ccb462da035",
"length" : 105292849
}
m30002| Thu Jun 14 01:37:36 [conn3] command sharded_files_id_n.$cmd command: { filemd5: ObjectId('4fd97892353ee533a5bdfb63'), partialOk: true, startAt: 6, md5state: BinData } ntoreturn:1 keyUpdates:0 numYields: 39 locks(micros) W:77 r:68746 reslen:197 110ms
m30001| Thu Jun 14 01:37:37 [conn3] command sharded_files_id_n.$cmd command: { filemd5: ObjectId('4fd97892353ee533a5bdfb63'), partialOk: true, startAt: 81, md5state: BinData } ntoreturn:1 keyUpdates:0 numYields: 156 locks(micros) W:196 r:82943 reslen:197 145ms
m30000| Thu Jun 14 01:37:37 [conn3] command sharded_files_id_n.$cmd command: { filemd5: ObjectId('4fd97892353ee533a5bdfb63'), partialOk: true, startAt: 237, md5state: BinData } ntoreturn:1 keyUpdates:0 numYields: 120 locks(micros) W:199 r:79665 reslen:197 159ms
filemd5 output: {
"md5state" : BinData(0,"iCE1MgAAAAAa2sK0e0QIA4TeiRv4xdRscGVkX3B0cklONW1vbmdvMThTY29wZWREYkNvbm5lY3Rpb25FRTVyZXNldEVQUzJfALbvCQkAAACIBe8JhTSkAA=="),
"numChunks" : 402,
"md5" : "cd2eb30417f1f1fb1c666ccb462da035",
"ok" : 1
}
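The filemd5 commands above are how the shell verifies the GridFS upload on a sharded fs.chunks collection: each shard hashes only its own contiguous range (partialOk: true, carrying md5state forward), and the final digest matches the md5 stored in fs.files. A minimal sketch of the simpler, non-partial check through mongos, assuming a shell connected to the mongos (the ObjectId, database name, and expected digest are copied from the log; the snippet is illustrative, not part of the test):

    // Recompute the md5 of the uploaded file and compare it with fs.files.
    var fileId = ObjectId("4fd97892353ee533a5bdfb63");
    var res = db.getSiblingDB("sharded_files_id_n").runCommand({ filemd5: fileId, root: "fs" });
    assert.eq("cd2eb30417f1f1fb1c666ccb462da035", res.md5);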
m30000| Thu Jun 14 01:37:38 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.3, size: 128MB, took 3.972 secs
m30000| Thu Jun 14 01:37:38 [clientcursormon] mem (MB) res:190 virt:529 mapped:384
m30001| Thu Jun 14 01:37:39 [clientcursormon] mem (MB) res:190 virt:520 mapped:384
m30002| Thu Jun 14 01:37:39 [clientcursormon] mem (MB) res:171 virt:455 mapped:320
m29000| Thu Jun 14 01:37:39 [clientcursormon] mem (MB) res:33 virt:153 mapped:32
m30999| Thu Jun 14 01:37:39 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:37:39 [conn6] end connection 127.0.0.1:54734 (12 connections now open)
m30000| Thu Jun 14 01:37:39 [conn3] end connection 127.0.0.1:56505 (6 connections now open)
m29000| Thu Jun 14 01:37:39 [conn5] end connection 127.0.0.1:54733 (11 connections now open)
m30001| Thu Jun 14 01:37:39 [conn3] end connection 127.0.0.1:44409 (7 connections now open)
m29000| Thu Jun 14 01:37:39 [conn4] end connection 127.0.0.1:54732 (10 connections now open)
m30000| Thu Jun 14 01:37:39 [conn4] end connection 127.0.0.1:56509 (5 connections now open)
m30001| Thu Jun 14 01:37:39 [conn4] end connection 127.0.0.1:44413 (6 connections now open)
m30002| Thu Jun 14 01:37:39 [conn3] end connection 127.0.0.1:45503 (6 connections now open)
m30002| Thu Jun 14 01:37:39 [conn4] end connection 127.0.0.1:45507 (5 connections now open)
m30002| Thu Jun 14 01:37:39 [conn5] end connection 127.0.0.1:45510 (4 connections now open)
m30001| Thu Jun 14 01:37:39 [conn5] end connection 127.0.0.1:44416 (5 connections now open)
m30000| Thu Jun 14 01:37:39 [conn5] end connection 127.0.0.1:56512 (4 connections now open)
m29000| Thu Jun 14 01:37:39 [conn3] end connection 127.0.0.1:54729 (10 connections now open)
m30001| Thu Jun 14 01:37:39 [conn8] end connection 127.0.0.1:44434 (4 connections now open)
Thu Jun 14 01:37:40 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:37:40 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:37:40 [interruptThread] now exiting
m30000| Thu Jun 14 01:37:40 dbexit:
m30000| Thu Jun 14 01:37:40 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:37:40 [interruptThread] closing listening socket: 29
m30000| Thu Jun 14 01:37:40 [interruptThread] closing listening socket: 30
m30000| Thu Jun 14 01:37:40 [interruptThread] closing listening socket: 31
m30000| Thu Jun 14 01:37:40 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:37:40 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:37:40 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:37:40 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:37:40 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:37:41 [conn7] end connection 127.0.0.1:45526 (3 connections now open)
m30001| Thu Jun 14 01:37:41 [conn6] end connection 127.0.0.1:44424 (3 connections now open)
m29000| Thu Jun 14 01:37:40 [conn8] end connection 127.0.0.1:54752 (8 connections now open)
m29000| Thu Jun 14 01:37:41 [conn9] end connection 127.0.0.1:54754 (7 connections now open)
m30000| Thu Jun 14 01:37:41 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:37:41 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:37:41 dbexit: really exiting now
Thu Jun 14 01:37:41 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:37:41 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:37:41 [interruptThread] now exiting
m30001| Thu Jun 14 01:37:41 dbexit:
m30001| Thu Jun 14 01:37:41 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:37:41 [interruptThread] closing listening socket: 32
m30001| Thu Jun 14 01:37:41 [interruptThread] closing listening socket: 33
m30001| Thu Jun 14 01:37:41 [interruptThread] closing listening socket: 34
m30001| Thu Jun 14 01:37:41 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:37:41 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:37:41 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:37:41 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:37:41 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:37:42 [conn6] end connection 127.0.0.1:45521 (2 connections now open)
m29000| Thu Jun 14 01:37:42 [conn10] end connection 127.0.0.1:54757 (6 connections now open)
m29000| Thu Jun 14 01:37:42 [conn11] end connection 127.0.0.1:54758 (5 connections now open)
m30001| Thu Jun 14 01:37:42 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:37:42 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:37:42 dbexit: really exiting now
Thu Jun 14 01:37:42 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:37:42 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:37:42 [interruptThread] now exiting
m30002| Thu Jun 14 01:37:42 dbexit:
m30002| Thu Jun 14 01:37:42 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:37:42 [interruptThread] closing listening socket: 35
m30002| Thu Jun 14 01:37:42 [interruptThread] closing listening socket: 36
m30002| Thu Jun 14 01:37:42 [interruptThread] closing listening socket: 37
m30002| Thu Jun 14 01:37:42 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:37:42 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:37:42 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:37:42 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:37:42 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:37:43 [conn12] end connection 127.0.0.1:54761 (4 connections now open)
m29000| Thu Jun 14 01:37:43 [conn13] end connection 127.0.0.1:54762 (3 connections now open)
m29000| Thu Jun 14 01:37:43 [conn7] end connection 127.0.0.1:54750 (2 connections now open)
m30002| Thu Jun 14 01:37:43 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:37:43 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:37:43 dbexit: really exiting now
Thu Jun 14 01:37:43 shell: stopped mongo program on port 30002
m29000| Thu Jun 14 01:37:44 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:37:44 [interruptThread] now exiting
m29000| Thu Jun 14 01:37:44 dbexit:
m29000| Thu Jun 14 01:37:44 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:37:44 [interruptThread] closing listening socket: 38
m29000| Thu Jun 14 01:37:44 [interruptThread] closing listening socket: 39
m29000| Thu Jun 14 01:37:44 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:37:44 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:37:44 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:37:44 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:37:44 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:37:44 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:37:44 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:37:44 dbexit: really exiting now
Thu Jun 14 01:37:45 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 67.854 seconds ***
68115.088940ms
Thu Jun 14 01:37:46 [initandlisten] connection accepted from 127.0.0.1:35000 #34 (21 connections now open)
*******************************************
Test : group_slaveok.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/group_slaveok.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/group_slaveok.js";TestData.testFile = "group_slaveok.js";TestData.testName = "group_slaveok";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:37:46 2012
MongoDB shell version: 2.1.2-pre-
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31100,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "groupSlaveOk-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "groupSlaveOk",
"shard" : 0,
"node" : 0,
"set" : "groupSlaveOk-rs0"
},
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/groupSlaveOk-rs0-0'
Thu Jun 14 01:37:46 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet groupSlaveOk-rs0 --dbpath /data/db/groupSlaveOk-rs0-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:37:46
m31100| Thu Jun 14 01:37:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:37:46
m31100| Thu Jun 14 01:37:46 [initandlisten] MongoDB starting : pid=25792 port=31100 dbpath=/data/db/groupSlaveOk-rs0-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:37:46 [initandlisten]
m31100| Thu Jun 14 01:37:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:37:46 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:37:46 [initandlisten]
m31100| Thu Jun 14 01:37:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:37:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:37:46 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:37:46 [initandlisten]
m31100| Thu Jun 14 01:37:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:37:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:37:46 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:37:46 [initandlisten] options: { dbpath: "/data/db/groupSlaveOk-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "groupSlaveOk-rs0", rest: true, smallfiles: true }
m31100| Thu Jun 14 01:37:46 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:37:47 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:37:47 [initandlisten] connection accepted from 10.255.119.66:60718 #1 (1 connection now open)
m31100| Thu Jun 14 01:37:47 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:37:47 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Thu Jun 14 01:37:47 [initandlisten] connection accepted from 127.0.0.1:60355 #2 (2 connections now open)
[ connection to domU-12-31-39-01-70-B4:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31101,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "groupSlaveOk-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "groupSlaveOk",
"shard" : 0,
"node" : 1,
"set" : "groupSlaveOk-rs0"
},
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/groupSlaveOk-rs0-1'
Thu Jun 14 01:37:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet groupSlaveOk-rs0 --dbpath /data/db/groupSlaveOk-rs0-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:37:47
m31101| Thu Jun 14 01:37:47 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:37:47
m31101| Thu Jun 14 01:37:47 [initandlisten] MongoDB starting : pid=25808 port=31101 dbpath=/data/db/groupSlaveOk-rs0-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:37:47 [initandlisten]
m31101| Thu Jun 14 01:37:47 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:37:47 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:37:47 [initandlisten]
m31101| Thu Jun 14 01:37:47 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:37:47 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:37:47 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:37:47 [initandlisten]
m31101| Thu Jun 14 01:37:47 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:37:47 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:37:47 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:37:47 [initandlisten] options: { dbpath: "/data/db/groupSlaveOk-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "groupSlaveOk-rs0", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:37:47 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:37:47 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:37:47 [initandlisten] connection accepted from 10.255.119.66:45089 #1 (1 connection now open)
m31101| Thu Jun 14 01:37:47 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:37:47 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101
]
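Everything from "Replica set test!" down to the connection list above is produced by the shell's ReplSetTest helper, which group_slaveok.js uses to stand up a two-member set before sharding against it. A minimal sketch of the usual helper pattern, assuming the standard ReplSetTest API (the test's actual source is not shown in this log, and the --noprealloc/--smallfiles/--rest flags seen in the startup lines are extra node options):

    // Typical ReplSetTest usage in a jstest; the set name, node count and
    // oplog size mirror the options objects printed above.
    var rst = new ReplSetTest({ name: "groupSlaveOk-rs0", nodes: 2, oplogSize: 40 });
    rst.startSet();       // brings up the two mongods seen above (ports 31100/31101)
    rst.initiate();       // sends the replSetInitiate document shown below
    var master = rst.getMaster();   // 2.1-era name; newer shells use getPrimary()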
{
"replSetInitiate" : {
"_id" : "groupSlaveOk-rs0",
"members" : [
{
"_id" : 0,
"host" : "domU-12-31-39-01-70-B4:31100"
},
{
"_id" : 1,
"host" : "domU-12-31-39-01-70-B4:31101"
}
]
}
}
m31101| Thu Jun 14 01:37:47 [initandlisten] connection accepted from 127.0.0.1:59922 #2 (2 connections now open)
m31100| Thu Jun 14 01:37:47 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:37:47 [conn2] replSet replSetInitiate config object parses ok, 2 members specified
m31101| Thu Jun 14 01:37:47 [initandlisten] connection accepted from 10.255.119.66:45091 #3 (3 connections now open)
m31100| Thu Jun 14 01:37:47 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:37:47 [conn2] ******
m31100| Thu Jun 14 01:37:47 [conn2] creating replication oplog of size: 40MB...
m31100| Thu Jun 14 01:37:47 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:37:47 [FileAllocator] creating directory /data/db/groupSlaveOk-rs0-0/_tmp
m31100| Thu Jun 14 01:37:47 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-0/local.ns, size: 16MB, took 0.234 secs
m31100| Thu Jun 14 01:37:47 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:37:48 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-0/local.0, size: 64MB, took 1.231 secs
m31100| Thu Jun 14 01:37:48 [conn2] ******
m31100| Thu Jun 14 01:37:48 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Thu Jun 14 01:37:48 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:37:48 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:37:48 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "groupSlaveOk-rs0", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1510575 w:33 reslen:112 1509ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
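The config document printed above the mongods' output is exactly what replSetInitiate received, and the servers' own log points at the shell helper for it (rs.initiate()). Doing the same by hand from a shell connected to domU-12-31-39-01-70-B4:31100 would look like the following (the document is copied verbatim from the log):

    // Hand-initiating the same two-member set; a 2-member set draws the
    // "total number of votes is even" warning seen further down.
    rs.initiate({
        _id: "groupSlaveOk-rs0",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31101" }
        ]
    });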
m31100| Thu Jun 14 01:37:57 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:37:57 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:37:57 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31100| Thu Jun 14 01:37:57 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:37:57 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:37:57 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:37:57 [initandlisten] connection accepted from 10.255.119.66:60724 #3 (3 connections now open)
m31101| Thu Jun 14 01:37:57 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:37:57 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:37:57 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:37:57 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:37:57 [FileAllocator] creating directory /data/db/groupSlaveOk-rs0-1/_tmp
m31101| Thu Jun 14 01:37:57 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-1/local.ns, size: 16MB, took 0.232 secs
m31101| Thu Jun 14 01:37:57 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-1/local.0, filling with zeroes...
m31101| Thu Jun 14 01:37:57 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-1/local.0, size: 16MB, took 0.253 secs
m31101| Thu Jun 14 01:37:57 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:37:57 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:37:57 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31101| Thu Jun 14 01:37:57 [rsSync] ******
m31101| Thu Jun 14 01:37:57 [rsSync] creating replication oplog of size: 40MB...
m31101| Thu Jun 14 01:37:57 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-1/local.1, filling with zeroes...
m31101| Thu Jun 14 01:37:58 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-1/local.1, size: 64MB, took 1.16 secs
m31101| Thu Jun 14 01:37:58 [rsSync] ******
m31101| Thu Jun 14 01:37:58 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:37:58 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Thu Jun 14 01:37:59 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:37:59 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31101 would veto
m31101| Thu Jun 14 01:37:59 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:37:59 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31100| Thu Jun 14 01:38:05 [rsMgr] replSet info electSelf 0
m31101| Thu Jun 14 01:38:05 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:38:05 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:38:05 [rsMgr] replSet PRIMARY
m31101| Thu Jun 14 01:38:05 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31100| Thu Jun 14 01:38:06 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-0/admin.ns, filling with zeroes...
m31100| Thu Jun 14 01:38:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31100| Thu Jun 14 01:38:07 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-0/admin.ns, size: 16MB, took 0.225 secs
m31100| Thu Jun 14 01:38:07 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-0/admin.0, filling with zeroes...
m31100| Thu Jun 14 01:38:07 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-0/admin.0, size: 16MB, took 0.275 secs
m31100| Thu Jun 14 01:38:07 [conn2] build index admin.foo { _id: 1 }
m31100| Thu Jun 14 01:38:07 [conn2] build index done. scanned 0 total records. 0.032 secs
m31100| Thu Jun 14 01:38:07 [conn2] insert admin.foo keyUpdates:0 locks(micros) W:1510575 w:543089 542ms
ReplSetTest Timestamp(1339652287000, 1)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
m31101| Thu Jun 14 01:38:14 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:38:14 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:38:14 [rsSync] build index local.me { _id: 1 }
m31101| Thu Jun 14 01:38:14 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:38:14 [initandlisten] connection accepted from 10.255.119.66:60725 #4 (4 connections now open)
m31101| Thu Jun 14 01:38:14 [rsSync] replSet initial sync drop all databases
m31101| Thu Jun 14 01:38:14 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Thu Jun 14 01:38:14 [rsSync] replSet initial sync clone all databases
m31101| Thu Jun 14 01:38:14 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:38:14 [initandlisten] connection accepted from 10.255.119.66:60726 #5 (5 connections now open)
m31101| Thu Jun 14 01:38:14 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-1/admin.ns, filling with zeroes...
m31101| Thu Jun 14 01:38:15 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-1/admin.ns, size: 16MB, took 0.268 secs
m31101| Thu Jun 14 01:38:15 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-1/admin.0, filling with zeroes...
m31101| Thu Jun 14 01:38:15 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-1/admin.0, size: 16MB, took 0.263 secs
m31101| Thu Jun 14 01:38:15 [rsSync] build index admin.foo { _id: 1 }
m31101| Thu Jun 14 01:38:15 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Thu Jun 14 01:38:15 [rsSync] build index done. scanned 1 total records. 0 secs
m31100| Thu Jun 14 01:38:15 [conn5] end connection 10.255.119.66:60726 (4 connections now open)
m31101| Thu Jun 14 01:38:15 [rsSync] replSet initial sync data copy, starting syncup
m31101| Thu Jun 14 01:38:15 [rsSync] replSet initial sync building indexes
m31101| Thu Jun 14 01:38:15 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:60727 #6 (5 connections now open)
m31101| Thu Jun 14 01:38:15 [rsSync] replSet initial sync query minValid
m31101| Thu Jun 14 01:38:15 [rsSync] replSet initial sync finishing up
m31100| Thu Jun 14 01:38:15 [conn6] end connection 10.255.119.66:60727 (4 connections now open)
m31101| Thu Jun 14 01:38:15 [rsSync] replSet set minValid=4fd978bf:1
m31101| Thu Jun 14 01:38:15 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Thu Jun 14 01:38:15 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:38:15 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:38:15 [conn4] end connection 10.255.119.66:60725 (3 connections now open)
{
"ts" : Timestamp(1339652287000, 1),
"h" : NumberLong("5580246562954543121"),
"op" : "i",
"ns" : "admin.foo",
"o" : {
"_id" : ObjectId("4fd978be3610552c87c78fab"),
"x" : 1
}
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339652287000:1 and latest is 1339652287000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 1
ReplSetTest await synced=true
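
The block above is the shell-side ReplSetTest helper blocking until the secondary on 31101 has finished initial sync and replicated the primary's newest oplog entry (the Timestamp(1339652287000, 1) insert of { x: 1 } into admin.foo). A minimal sketch of how a jstest typically drives this with the 2.x shell helpers; the variable names and options here are assumptions, not taken from the actual test file:

    // Sketch (mongo shell JavaScript), assuming a 2-node set like the one in this log.
    var rst = new ReplSetTest({ name: "groupSlaveOk-rs0", nodes: 2 });
    rst.startSet();
    rst.initiate();

    // Insert through the primary, then block until the secondary has caught up;
    // this wait is what produces the "await TS ... / await synced=true" lines above.
    rst.getMaster().getDB("admin").foo.insert({ x: 1 });
    rst.awaitReplication();

    // The ts/h/op document printed above is simply the newest entry in the primary's oplog:
    printjson(rst.getMaster().getDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1).next());
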
Thu Jun 14 01:38:15 starting new replica set monitor for replica set groupSlaveOk-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:38:15 successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set groupSlaveOk-rs0
m31100| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:60728 #7 (4 connections now open)
Thu Jun 14 01:38:15 changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31101" } from groupSlaveOk-rs0/
Thu Jun 14 01:38:15 trying to add new host domU-12-31-39-01-70-B4:31100 to replica set groupSlaveOk-rs0
Thu Jun 14 01:38:15 successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set groupSlaveOk-rs0
Thu Jun 14 01:38:15 trying to add new host domU-12-31-39-01-70-B4:31101 to replica set groupSlaveOk-rs0
m31100| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:60729 #8 (5 connections now open)
Thu Jun 14 01:38:15 successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set groupSlaveOk-rs0
m31101| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:45098 #4 (4 connections now open)
m31100| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:60731 #9 (6 connections now open)
m31100| Thu Jun 14 01:38:15 [conn7] end connection 10.255.119.66:60728 (5 connections now open)
Thu Jun 14 01:38:15 Primary for replica set groupSlaveOk-rs0 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:45100 #5 (5 connections now open)
Thu Jun 14 01:38:15 replica set monitor for replica set groupSlaveOk-rs0 started, address is groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:38:15 [ReplicaSetMonitorWatcher] starting
Resetting db path '/data/db/groupSlaveOk-config0'
m31100| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:60733 #10 (6 connections now open)
Thu Jun 14 01:38:15 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/groupSlaveOk-config0
m29000| Thu Jun 14 01:38:15
m29000| Thu Jun 14 01:38:15 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:38:15
m29000| Thu Jun 14 01:38:15 [initandlisten] MongoDB starting : pid=25862 port=29000 dbpath=/data/db/groupSlaveOk-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:38:15 [initandlisten]
m29000| Thu Jun 14 01:38:15 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:38:15 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:38:15 [initandlisten]
m29000| Thu Jun 14 01:38:15 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:38:15 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:38:15 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:38:15 [initandlisten]
m29000| Thu Jun 14 01:38:15 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:38:15 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:38:15 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:38:15 [initandlisten] options: { dbpath: "/data/db/groupSlaveOk-config0", port: 29000 }
m29000| Thu Jun 14 01:38:15 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:38:15 [websvr] admin web console waiting for connections on port 30000
"domU-12-31-39-01-70-B4:29000"
m29000| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 127.0.0.1:54785 #1 (1 connection now open)
ShardingTest groupSlaveOk :
{
    "config" : "domU-12-31-39-01-70-B4:29000",
    "shards" : [
        connection to groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
    ]
}
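
The ShardingTest printout above lists the config server on 29000 and the single replica-set shard; the mongos on 30999 is launched on the next line. Once it is up, the cluster it describes could be inspected from a shell roughly like this (hostname and port are taken from this log, everything else is illustrative):

    // Connect to the mongos from this run and look at what ShardingTest wired up.
    var mongos = new Mongo("domU-12-31-39-01-70-B4:30999");
    var config = mongos.getDB("config");
    printjson(config.shards.find().toArray());                      // expect groupSlaveOk-rs0 here
    printjson(mongos.getDB("admin").runCommand({ listShards: 1 }));  // same data via the admin command
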
Thu Jun 14 01:38:15 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:29000
m29000| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:52881 #2 (2 connections now open)
m29000| Thu Jun 14 01:38:15 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:38:15 [FileAllocator] creating directory /data/db/groupSlaveOk-config0/_tmp
m30999| Thu Jun 14 01:38:15 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:38:15 [mongosMain] MongoS version 2.1.2-pre- starting: pid=25877 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:38:15 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:38:15 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:38:15 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", port: 30999 }
m29000| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:52883 #3 (3 connections now open)
m31101| Thu Jun 14 01:38:15 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:38:15 [initandlisten] connection accepted from 10.255.119.66:60739 #11 (7 connections now open)
m29000| Thu Jun 14 01:38:16 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-config0/config.ns, size: 16MB, took 0.287 secs
m29000| Thu Jun 14 01:38:16 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:38:16 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-config0/config.0, size: 16MB, took 0.355 secs
m29000| Thu Jun 14 01:38:16 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:38:16 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn2] insert config.settings keyUpdates:0 locks(micros) w:660809 660ms
m29000| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:52888 #4 (4 connections now open)
m29000| Thu Jun 14 01:38:16 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn3] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:38:16 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn3] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:38:16 [conn3] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:16 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:38:16 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:38:16 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:38:16 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:38:16 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:38:16
m30999| Thu Jun 14 01:38:16 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:52889 #5 (5 connections now open)
m29000| Thu Jun 14 01:38:16 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:16 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30999:1339652296:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:38:16 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn5] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:16 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:38:16 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652296:1804289383' acquired, ts : 4fd978c8642fae0555d4fe14
m30999| Thu Jun 14 01:38:16 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652296:1804289383' unlocked.
m31100| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:60745 #12 (8 connections now open)
m31101| Thu Jun 14 01:38:16 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:38:16 [rsSync] replSet SECONDARY
ShardingTest undefined going to add shard : groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:38:16 [mongosMain] connection accepted from 127.0.0.1:53443 #1 (1 connection now open)
m30999| Thu Jun 14 01:38:16 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:38:16 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:38:16 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:16 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:38:16 [conn] starting new replica set monitor for replica set groupSlaveOk-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:38:16 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set groupSlaveOk-rs0
m31100| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:60747 #13 (9 connections now open)
m30999| Thu Jun 14 01:38:16 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31101" } from groupSlaveOk-rs0/
m30999| Thu Jun 14 01:38:16 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set groupSlaveOk-rs0
m30999| Thu Jun 14 01:38:16 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set groupSlaveOk-rs0
m30999| Thu Jun 14 01:38:16 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set groupSlaveOk-rs0
m31100| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:60748 #14 (10 connections now open)
m30999| Thu Jun 14 01:38:16 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set groupSlaveOk-rs0
m31101| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:45117 #6 (6 connections now open)
m31100| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:60750 #15 (11 connections now open)
m31100| Thu Jun 14 01:38:16 [conn13] end connection 10.255.119.66:60747 (10 connections now open)
m30999| Thu Jun 14 01:38:16 [conn] Primary for replica set groupSlaveOk-rs0 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:45119 #7 (7 connections now open)
m30999| Thu Jun 14 01:38:16 [conn] replica set monitor for replica set groupSlaveOk-rs0 started, address is groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:38:16 [ReplicaSetMonitorWatcher] starting
m31100| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:60752 #16 (11 connections now open)
m30999| Thu Jun 14 01:38:16 [conn] going to add shard: { _id: "groupSlaveOk-rs0", host: "groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101" }
{ "shardAdded" : "groupSlaveOk-rs0", "ok" : 1 }
m30999| Thu Jun 14 01:38:16 [mongosMain] connection accepted from 10.255.119.66:37184 #2 (2 connections now open)
m30999| Thu Jun 14 01:38:16 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:38:16 [conn] best shard for new allocation is shard: groupSlaveOk-rs0:groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101 mapped: 112 writeLock: 0
m30999| Thu Jun 14 01:38:16 [conn] put [test] on: groupSlaveOk-rs0:groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.$cmd msg id:48 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] single query: test.$cmd { drop: "groupSlaveOk" } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:38:16 [conn] DROP: test.groupSlaveOk
m31100| Thu Jun 14 01:38:16 [initandlisten] connection accepted from 10.255.119.66:60754 #17 (12 connections now open)
m30999| Thu Jun 14 01:38:16 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:16 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31100 serverID: 4fd978c8642fae0555d4fe13
m30999| Thu Jun 14 01:38:16 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31101 serverID: 4fd978c8642fae0555d4fe13
m30999| Thu Jun 14 01:38:16 [conn] initializing shard connection to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:38:16 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd978c8642fae0555d4fe13'), authoritative: true }
m30999| Thu Jun 14 01:38:16 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:38:16 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:38:16 [conn] initial sharding result : { initialized: true, ok: 1.0 }
m31100| Thu Jun 14 01:38:16 [conn17] CMD: drop test.groupSlaveOk
m31100| Thu Jun 14 01:38:16 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-0/test.ns, filling with zeroes...
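
The long run of Request::process / write pairs that follows is the mongos logging one pair of lines per insert as the test loads test.groupSlaveOk through it (msg ids counting up from 49). A hedged sketch of the kind of loop that generates this traffic; the document shape and the count are guesses, since neither is visible in the log:

    // Illustrative only: each insert appears in the log as one
    // "Request::process ns: test.groupSlaveOk ..." / "write: test.groupSlaveOk" pair.
    var mongos = new Mongo("domU-12-31-39-01-70-B4:30999");
    var coll = mongos.getCollection("test.groupSlaveOk");
    coll.drop();                             // corresponds to the DROP: test.groupSlaveOk line above
    for (var i = 0; i < 300; i++) {          // count is a guess
        coll.insert({ i: i });
    }
    coll.getDB().getLastError();             // 2.x-style write acknowledgement
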
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:49 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:50 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:51 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:52 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:53 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:54 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:55 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:56 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:57 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:58 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:59 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:60 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:61 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:62 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:63 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:64 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:65 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:66 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:67 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:68 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:69 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:70 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:71 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:72 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:73 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:74 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:75 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:76 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:77 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:78 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:79 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:80 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:81 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:82 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:83 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:84 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:85 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:86 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:87 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:88 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:89 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:90 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:91 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:92 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:93 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:94 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:95 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:96 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:97 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:98 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:99 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:100 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:101 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:102 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:103 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:104 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:105 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:106 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:107 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:108 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:109 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:110 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:111 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:112 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:113 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:114 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:115 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:116 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:117 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:118 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:119 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:120 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:121 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:122 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:123 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:124 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:125 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:126 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:127 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:128 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:129 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:130 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:131 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:132 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:133 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:134 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:135 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:136 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:137 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:138 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:139 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:140 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:141 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:142 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:143 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:144 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:145 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:146 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:147 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:148 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:149 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:150 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:151 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:152 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:153 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:154 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:155 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:156 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:157 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:158 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:159 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:160 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:161 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:162 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:163 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:164 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:165 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:166 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:167 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:168 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:169 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:170 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:171 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:172 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:173 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:174 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:175 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:176 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:177 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:178 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:179 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:180 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:181 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:182 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:183 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:184 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:185 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:186 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:187 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:188 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:189 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:190 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:191 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:192 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:193 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:194 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:195 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:196 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:197 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:198 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:199 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:200 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:201 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:202 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:203 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:204 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:205 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:206 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:207 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:208 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:209 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:210 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:211 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:212 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:213 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:214 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:215 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:216 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:217 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:218 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:219 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:220 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:221 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:222 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:223 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:224 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:225 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:226 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:227 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:228 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:229 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:230 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:231 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:232 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:233 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:234 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:235 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:236 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:237 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:238 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:239 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:240 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:241 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:242 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:243 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:244 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:245 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:246 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:247 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:248 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:249 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:250 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:251 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:252 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:253 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:254 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:255 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:256 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:257 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:258 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:259 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:260 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:261 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:262 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:263 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:264 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:265 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:266 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:267 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:268 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:269 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:270 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:271 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:272 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:273 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:274 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:275 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:276 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:277 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:278 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:279 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:280 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:281 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:282 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:283 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:284 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:285 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:286 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:287 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:288 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:289 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:290 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:291 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:292 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:293 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:294 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:295 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:296 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:297 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:298 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:299 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:300 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:301 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:302 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:303 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:304 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:305 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:306 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:307 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:308 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:309 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:310 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:311 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:312 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:313 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:314 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:315 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:316 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:317 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:318 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:319 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:320 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:321 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:322 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:323 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:324 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:325 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:326 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:327 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:328 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:329 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:330 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:331 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:332 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:333 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:334 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:335 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:336 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:337 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:338 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:339 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:340 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:341 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:342 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:343 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:344 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:345 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:346 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:347 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.groupSlaveOk msg id:348 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] write: test.groupSlaveOk
m30999| Thu Jun 14 01:38:16 [conn] Request::process ns: test.$cmd msg id:349 attempt: 0
m30999| Thu Jun 14 01:38:16 [conn] single query: test.$cmd { getlasterror: 1.0 } ntoreturn: -1 options : 0
m31100| Thu Jun 14 01:38:17 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m29000| Thu Jun 14 01:38:17 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-config0/config.1, size: 32MB, took 0.768 secs
m31100| Thu Jun 14 01:38:17 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-0/test.ns, size: 16MB, took 0.562 secs
m31100| Thu Jun 14 01:38:17 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-0/test.0, filling with zeroes...
m31100| Thu Jun 14 01:38:17 [slaveTracking] build index local.slaves { _id: 1 }
Thu Jun 14 01:38:17 [clientcursormon] mem (MB) res:16 virt:129 mapped:0
m31100| Thu Jun 14 01:38:17 [slaveTracking] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:38:17 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-0/test.0, size: 16MB, took 0.286 secs
m31100| Thu Jun 14 01:38:17 [conn17] build index test.groupSlaveOk { _id: 1 }
m31100| Thu Jun 14 01:38:17 [conn17] build index done. scanned 0 total records. 0 secs
m31100| Thu Jun 14 01:38:17 [conn17] insert test.groupSlaveOk keyUpdates:0 locks(micros) W:514 w:858844 858ms
m30999| Thu Jun 14 01:38:17 [conn] Request::process ns: config.version msg id:350 attempt: 0
m30999| Thu Jun 14 01:38:17 [conn] shard query: config.version {}
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:17 [conn] initializing shard connection to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:38:17 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd978c8642fae0555d4fe13'), authoritative: true }
m31100| Thu Jun 14 01:38:17 [initandlisten] connection accepted from 10.255.119.66:60755 #18 (13 connections now open)
m30999| Thu Jun 14 01:38:17 [conn] initial sharding result : { initialized: true, ok: 1.0 }
m30999| Thu Jun 14 01:38:17 [conn] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:38:17 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:17 [conn] connected connection!
m30999| Thu Jun 14 01:38:17 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:29000 serverID: 4fd978c8642fae0555d4fe13
m30999| Thu Jun 14 01:38:17 [conn] initializing shard connection to domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:38:17 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd978c8642fae0555d4fe13'), authoritative: true }
m30999| Thu Jun 14 01:38:17 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:38:17 [WriteBackListener-domU-12-31-39-01-70-B4:29000] domU-12-31-39-01-70-B4:29000 is not a shard node
m29000| Thu Jun 14 01:38:17 [initandlisten] connection accepted from 10.255.119.66:52901 #6 (6 connections now open)
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "groupSlaveOk-rs0", "host" : "groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : false, "primary" : "groupSlaveOk-rs0" }
ReplSetTest Timestamp(1339652297000, 300)
{
"ts" : Timestamp(1339652287000, 1),
"h" : NumberLong("5580246562954543121"),
"op" : "i",
"ns" : "admin.foo",
"o" : {
"_id" : ObjectId("4fd978be3610552c87c78fab"),
"x" : 1
}
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339652287000:1 and latest is 1339652297000:300
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 1
m30999| Thu Jun 14 01:38:17 [conn] initial sharding result : { initialized: true, ok: 1.0 }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] Request::process ns: config.version msg id:351 attempt: 0
m30999| Thu Jun 14 01:38:17 [conn] shard query: config.version {}
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] Request::process ns: config.shards msg id:352 attempt: 0
m30999| Thu Jun 14 01:38:17 [conn] shard query: config.shards { query: {}, orderby: { _id: 1.0 } }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: "groupSlaveOk-rs0", host: "groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] Request::process ns: config.databases msg id:353 attempt: 0
m30999| Thu Jun 14 01:38:17 [conn] shard query: config.databases { query: {}, orderby: { name: 1.0 } }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:17 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m31101| Thu Jun 14 01:38:18 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-1/test.ns, filling with zeroes...
m31101| Thu Jun 14 01:38:18 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-1/test.ns, size: 16MB, took 0.292 secs
m31101| Thu Jun 14 01:38:18 [FileAllocator] allocating new datafile /data/db/groupSlaveOk-rs0-1/test.0, filling with zeroes...
m31101| Thu Jun 14 01:38:19 [FileAllocator] done allocating datafile /data/db/groupSlaveOk-rs0-1/test.0, size: 16MB, took 0.444 secs
m31101| Thu Jun 14 01:38:19 [rsSync] build index test.groupSlaveOk { _id: 1 }
m31101| Thu Jun 14 01:38:19 [rsSync] build index done. scanned 0 total records. 0 secs
{
"ts" : Timestamp(1339652297000, 300),
"h" : NumberLong("2101077862198481663"),
"op" : "i",
"ns" : "test.groupSlaveOk",
"o" : {
"_id" : ObjectId("4fd978c83610552c87c790d9"),
"i" : 9
}
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339652297000:300 and latest is 1339652297000:300
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 301
ReplSetTest await synced=true
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Thu Jun 14 01:38:19 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:38:19 [interruptThread] now exiting
m31100| Thu Jun 14 01:38:19 dbexit:
m31100| Thu Jun 14 01:38:19 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:38:19 [interruptThread] closing listening socket: 30
m31100| Thu Jun 14 01:38:19 [interruptThread] closing listening socket: 31
m31100| Thu Jun 14 01:38:19 [interruptThread] closing listening socket: 32
m31100| Thu Jun 14 01:38:19 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:38:19 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:38:19 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:38:19 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:38:19 [conn3] end connection 10.255.119.66:45091 (6 connections now open)
m31101| Thu Jun 14 01:38:19 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:38:19 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:38:19 [conn1] end connection 10.255.119.66:60718 (12 connections now open)
m30999| Thu Jun 14 01:38:19 [WriteBackListener-domU-12-31-39-01-70-B4:31100] Socket recv() conn closed? 10.255.119.66:31100
m30999| Thu Jun 14 01:38:19 [WriteBackListener-domU-12-31-39-01-70-B4:31100] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [0] server [10.255.119.66:31100]
m30999| Thu Jun 14 01:38:19 [WriteBackListener-domU-12-31-39-01-70-B4:31100] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:38:19 [WriteBackListener-domU-12-31-39-01-70-B4:31100] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd978c8642fae0555d4fe13') }
m31100| Thu Jun 14 01:38:19 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:38:19 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:38:19 dbexit: really exiting now
m30999| Thu Jun 14 01:38:19 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd978c8642fae0555d4fe13') }
m31101| Thu Jun 14 01:38:20 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:38:20 shell: stopped mongo program on port 31100
Thu Jun 14 01:38:20 DBClientCursor::init call() failed
Thu Jun 14 01:38:20 query failed : admin.$cmd { ismaster: 1.0 } to: 127.0.0.1:31100
ReplSetTest Could not call ismaster on node 0
{
"set" : "groupSlaveOk-rs0",
"date" : ISODate("2012-06-14T05:38:20Z"),
"myState" : 2,
"syncingTo" : "domU-12-31-39-01-70-B4:31100",
"members" : [
{
"_id" : 0,
"name" : "domU-12-31-39-01-70-B4:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 21,
"optime" : Timestamp(1339652297000, 300),
"optimeDate" : ISODate("2012-06-14T05:38:17Z"),
"lastHeartbeat" : ISODate("2012-06-14T05:38:19Z"),
"pingMs" : 0
},
{
"_id" : 1,
"name" : "domU-12-31-39-01-70-B4:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 33,
"optime" : Timestamp(1339652297000, 300),
"optimeDate" : ISODate("2012-06-14T05:38:17Z"),
"errmsg" : "db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100",
"self" : true
}
],
"ok" : 1
}
Awaiting domU-12-31-39-01-70-B4:31101 to be { "ok" : true, "secondary" : true } for connection to domU-12-31-39-01-70-B4:30999 (rs: undefined)
m30999| Thu Jun 14 01:38:20 [conn] Request::process ns: admin.$cmd msg id:374 attempt: 0
m30999| Thu Jun 14 01:38:20 [conn] single query: admin.$cmd { connPoolStats: 1.0 } ntoreturn: -1 options : 0
{
"groupSlaveOk-rs0" : {
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
}
m30999| Thu Jun 14 01:38:20 [conn] Request::process ns: test.$cmd msg id:375 attempt: 0
m30999| Thu Jun 14 01:38:20 [conn] single query: test.$cmd { group: { key: { i: true }, initial: { count: 0.0 }, ns: "groupSlaveOk", $reduce: function (obj, ctx) {
m30999| ctx.count += 1;
m30999| } } } ntoreturn: -1 options : 4
m30999| Thu Jun 14 01:38:20 [WriteBackListener-domU-12-31-39-01-70-B4:31100] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:38:20 [conn] slave ':27017' is not initialized or invalid
m30999| Thu Jun 14 01:38:20 [conn] dbclient_rs getSlave groupSlaveOk-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:38:20 [conn] dbclient_rs getSlave found local secondary for queries: 1, ping time: 0
m30999| Thu Jun 14 01:38:20 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:20 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:20 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : socket exception
m31101| Thu Jun 14 01:38:20 [initandlisten] connection accepted from 10.255.119.66:45125 #8 (7 connections now open)
m30999| Thu Jun 14 01:38:20 [conn] Request::process ns: test.$cmd msg id:376 attempt: 0
m30999| Thu Jun 14 01:38:20 [conn] single query: test.$cmd { group: { key: { i: true }, initial: { count: 0.0 }, ns: "groupSlaveOk", $reduce: function (obj, ctx) {
m30999| ctx.count += 1;
m30999| } } } ntoreturn: -1 options : 0
m30999| Thu Jun 14 01:38:20 [conn] Socket recv() conn closed? 10.255.119.66:31100
m30999| Thu Jun 14 01:38:20 [conn] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [0] server [10.255.119.66:31100]
m30999| Thu Jun 14 01:38:20 [conn] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:38:20 [conn] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: test.$cmd query: { group: { key: { i: true }, initial: { count: 0.0 }, ns: "groupSlaveOk", $reduce: function (obj, ctx) {
m30999| ctx.count += 1;
m30999| } } }
Non-slaveOk'd connection failed.
m31101| Thu Jun 14 01:38:20 [conn8] end connection 10.255.119.66:45125 (6 connections now open)
m30999| Thu Jun 14 01:38:20 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:38:20 [conn3] end connection 10.255.119.66:52883 (5 connections now open)
m29000| Thu Jun 14 01:38:20 [conn4] end connection 10.255.119.66:52888 (4 connections now open)
m29000| Thu Jun 14 01:38:20 [conn5] end connection 10.255.119.66:52889 (3 connections now open)
m29000| Thu Jun 14 01:38:20 [conn6] end connection 10.255.119.66:52901 (3 connections now open)
m31101| Thu Jun 14 01:38:20 [conn6] end connection 10.255.119.66:45117 (5 connections now open)
m31101| Thu Jun 14 01:38:21 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Thu Jun 14 01:38:21 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "groupSlaveOk-rs0", v: 1, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31101" }
m31101| Thu Jun 14 01:38:21 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31101| Thu Jun 14 01:38:21 [rsMgr] replSet can't see a majority, will not try to elect self
Thu Jun 14 01:38:21 shell: stopped mongo program on port 30999
Thu Jun 14 01:38:21 No db started on port: 30000
Thu Jun 14 01:38:21 shell: stopped mongo program on port 30000
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
Thu Jun 14 01:38:21 No db started on port: 31100
Thu Jun 14 01:38:21 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Thu Jun 14 01:38:21 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:38:21 [interruptThread] now exiting
m31101| Thu Jun 14 01:38:21 dbexit:
m31101| Thu Jun 14 01:38:21 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:38:21 [interruptThread] closing listening socket: 33
m31101| Thu Jun 14 01:38:21 [interruptThread] closing listening socket: 34
m31101| Thu Jun 14 01:38:21 [interruptThread] closing listening socket: 36
m31101| Thu Jun 14 01:38:21 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:38:21 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:38:21 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:38:21 [conn1] end connection 10.255.119.66:45089 (4 connections now open)
m31101| Thu Jun 14 01:38:21 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:38:21 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:38:21 [interruptThread] closeAllFiles() finished
m31101| Thu Jun 14 01:38:21 [interruptThread] shutdown: removing fs lock...
m31101| Thu Jun 14 01:38:21 dbexit: really exiting now
Thu Jun 14 01:38:22 shell: stopped mongo program on port 31101
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
m29000| Thu Jun 14 01:38:22 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:38:22 [interruptThread] now exiting
m29000| Thu Jun 14 01:38:22 dbexit:
m29000| Thu Jun 14 01:38:22 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:38:22 [interruptThread] closing listening socket: 41
m29000| Thu Jun 14 01:38:22 [interruptThread] closing listening socket: 42
m29000| Thu Jun 14 01:38:22 [interruptThread] closing listening socket: 43
m29000| Thu Jun 14 01:38:22 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:38:22 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:38:22 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:38:22 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:38:22 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:38:22 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:38:22 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:38:22 dbexit: really exiting now
Thu Jun 14 01:38:23 shell: stopped mongo program on port 29000
*** ShardingTest groupSlaveOk completed successfully in 37.035 seconds ***
37124.851942ms
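The groupSlaveOk run above boils down to the following mongo-shell sequence. This is a minimal sketch reconstructed from the log (the mongos host, the collection, the group spec, and the slaveOk/non-slaveOk outcomes are taken from the lines above), not the verbatim test source; variable names are illustrative.

    // Minimal sketch, assuming the mongos shown in the log (domU-12-31-39-01-70-B4:30999)
    // and the replica-set primary on 31100 already shut down.
    var conn = new Mongo("domU-12-31-39-01-70-B4:30999");
    var testDB = conn.getDB("test");

    conn.setSlaveOk();                 // the log's group query with "options : 4" (slaveOk)
    var buckets = testDB.groupSlaveOk.group({
        key: { i: true },
        initial: { count: 0 },
        reduce: function (obj, ctx) { ctx.count += 1; }
    });
    printjson(buckets);                // served by the secondary on 31101

    conn.setSlaveOk(false);            // the log's group query with "options : 0"
    assert.throws(function () {        // routed to the stopped primary on 31100 -> transport error
        testDB.groupSlaveOk.group({
            key: { i: true },
            initial: { count: 0 },
            reduce: function (obj, ctx) { ctx.count += 1; }
        });
    });
    print("Non-slaveOk'd connection failed.");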
Thu Jun 14 01:38:23 [initandlisten] connection accepted from 127.0.0.1:35043 #35 (22 connections now open)
*******************************************
Test : inTiming.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/inTiming.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/inTiming.js";TestData.testFile = "inTiming.js";TestData.testName = "inTiming";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:38:23 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/sharding_inqueries0'
Thu Jun 14 01:38:24 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/sharding_inqueries0
m30000| Thu Jun 14 01:38:24
m30000| Thu Jun 14 01:38:24 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:38:24
m30000| Thu Jun 14 01:38:24 [initandlisten] MongoDB starting : pid=25934 port=30000 dbpath=/data/db/sharding_inqueries0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:38:24 [initandlisten]
m30000| Thu Jun 14 01:38:24 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:38:24 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:38:24 [initandlisten]
m30000| Thu Jun 14 01:38:24 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:38:24 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:38:24 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:38:24 [initandlisten]
m30000| Thu Jun 14 01:38:24 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:38:24 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:38:24 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:38:24 [initandlisten] options: { dbpath: "/data/db/sharding_inqueries0", port: 30000 }
m30000| Thu Jun 14 01:38:24 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:38:24 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/sharding_inqueries1'
m30000| Thu Jun 14 01:38:24 [initandlisten] connection accepted from 127.0.0.1:56577 #1 (1 connection now open)
Thu Jun 14 01:38:24 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/sharding_inqueries1
m30001| Thu Jun 14 01:38:24
m30001| Thu Jun 14 01:38:24 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:38:24
m30001| Thu Jun 14 01:38:24 [initandlisten] MongoDB starting : pid=25947 port=30001 dbpath=/data/db/sharding_inqueries1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:38:24 [initandlisten]
m30001| Thu Jun 14 01:38:24 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:38:24 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:38:24 [initandlisten]
m30001| Thu Jun 14 01:38:24 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:38:24 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:38:24 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:38:24 [initandlisten]
m30001| Thu Jun 14 01:38:24 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:38:24 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:38:24 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:38:24 [initandlisten] options: { dbpath: "/data/db/sharding_inqueries1", port: 30001 }
m30001| Thu Jun 14 01:38:24 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:38:24 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/sharding_inqueries2'
m30001| Thu Jun 14 01:38:24 [initandlisten] connection accepted from 127.0.0.1:44482 #1 (1 connection now open)
Thu Jun 14 01:38:24 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30002 --dbpath /data/db/sharding_inqueries2
m30002| Thu Jun 14 01:38:24
m30002| Thu Jun 14 01:38:24 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Thu Jun 14 01:38:24
m30002| Thu Jun 14 01:38:24 [initandlisten] MongoDB starting : pid=25960 port=30002 dbpath=/data/db/sharding_inqueries2 32-bit host=domU-12-31-39-01-70-B4
m30002| Thu Jun 14 01:38:24 [initandlisten]
m30002| Thu Jun 14 01:38:24 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Thu Jun 14 01:38:24 [initandlisten] ** Not recommended for production.
m30002| Thu Jun 14 01:38:24 [initandlisten]
m30002| Thu Jun 14 01:38:24 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Thu Jun 14 01:38:24 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Thu Jun 14 01:38:24 [initandlisten] ** with --journal, the limit is lower
m30002| Thu Jun 14 01:38:24 [initandlisten]
m30002| Thu Jun 14 01:38:24 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Thu Jun 14 01:38:24 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Thu Jun 14 01:38:24 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30002| Thu Jun 14 01:38:24 [initandlisten] options: { dbpath: "/data/db/sharding_inqueries2", port: 30002 }
m30002| Thu Jun 14 01:38:24 [initandlisten] waiting for connections on port 30002
m30002| Thu Jun 14 01:38:24 [websvr] admin web console waiting for connections on port 31002
"localhost:30000"
m30002| Thu Jun 14 01:38:24 [initandlisten] connection accepted from 127.0.0.1:45577 #1 (1 connection now open)
m30000| Thu Jun 14 01:38:24 [initandlisten] connection accepted from 127.0.0.1:56582 #2 (2 connections now open)
ShardingTest sharding_inqueries :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001,
connection to localhost:30002
]
}
Thu Jun 14 01:38:24 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30000| Thu Jun 14 01:38:24 [FileAllocator] allocating new datafile /data/db/sharding_inqueries0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:38:24 [FileAllocator] creating directory /data/db/sharding_inqueries0/_tmp
m30000| Thu Jun 14 01:38:24 [initandlisten] connection accepted from 127.0.0.1:56584 #3 (3 connections now open)
m30999| Thu Jun 14 01:38:24 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:38:24 [mongosMain] MongoS version 2.1.2-pre- starting: pid=25975 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:38:24 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:38:24 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:38:24 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:38:24 [FileAllocator] done allocating datafile /data/db/sharding_inqueries0/config.ns, size: 16MB, took 0.27 secs
m30000| Thu Jun 14 01:38:24 [FileAllocator] allocating new datafile /data/db/sharding_inqueries0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:38:25 [FileAllocator] done allocating datafile /data/db/sharding_inqueries0/config.0, size: 16MB, took 0.304 secs
m30000| Thu Jun 14 01:38:25 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn2] insert config.settings keyUpdates:0 locks(micros) w:585818 585ms
m30000| Thu Jun 14 01:38:25 [FileAllocator] allocating new datafile /data/db/sharding_inqueries0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:38:25 [initandlisten] connection accepted from 127.0.0.1:56587 #4 (4 connections now open)
m30000| Thu Jun 14 01:38:25 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:38:25 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:38:25 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:25 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:38:25 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:38:25 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:38:25 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:38:25 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:38:25
m30999| Thu Jun 14 01:38:25 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:38:25 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [initandlisten] connection accepted from 127.0.0.1:56588 #5 (5 connections now open)
m30000| Thu Jun 14 01:38:25 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:25 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652305:1804289383' acquired, ts : 4fd978d1adc8838d53363b35
m30999| Thu Jun 14 01:38:25 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652305:1804289383' unlocked.
m30999| Thu Jun 14 01:38:25 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652305:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:38:25 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:38:25 [mongosMain] connection accepted from 127.0.0.1:53470 #1 (1 connection now open)
m30999| Thu Jun 14 01:38:25 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:38:25 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:25 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:38:25 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:38:25 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30999| Thu Jun 14 01:38:25 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
m30999| Thu Jun 14 01:38:25 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:38:25 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:38:25 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:38:25 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { a: 1.0, b: 1.0 } }
m30999| Thu Jun 14 01:38:25 [conn] enable sharding on: test.foo with shard key: { a: 1.0, b: 1.0 }
m30999| Thu Jun 14 01:38:25 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd978d1adc8838d53363b36
m30999| Thu Jun 14 01:38:25 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd978d1adc8838d53363b36 based on: (empty)
m30000| Thu Jun 14 01:38:25 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:38:25 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:25 [initandlisten] connection accepted from 127.0.0.1:56592 #6 (6 connections now open)
m30999| Thu Jun 14 01:38:25 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd978d1adc8838d53363b34
m30999| Thu Jun 14 01:38:25 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:38:25 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd978d1adc8838d53363b34
m30002| Thu Jun 14 01:38:25 [initandlisten] connection accepted from 127.0.0.1:45587 #2 (2 connections now open)
m30001| Thu Jun 14 01:38:25 [initandlisten] connection accepted from 127.0.0.1:44493 #2 (2 connections now open)
m30001| Thu Jun 14 01:38:25 [FileAllocator] allocating new datafile /data/db/sharding_inqueries1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:38:25 [FileAllocator] creating directory /data/db/sharding_inqueries1/_tmp
m30001| Thu Jun 14 01:38:25 [initandlisten] connection accepted from 127.0.0.1:44496 #3 (3 connections now open)
m30000| Thu Jun 14 01:38:25 [FileAllocator] done allocating datafile /data/db/sharding_inqueries0/config.1, size: 32MB, took 0.653 secs
m30001| Thu Jun 14 01:38:26 [FileAllocator] done allocating datafile /data/db/sharding_inqueries1/test.ns, size: 16MB, took 0.412 secs
m30001| Thu Jun 14 01:38:26 [FileAllocator] allocating new datafile /data/db/sharding_inqueries1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:38:26 [FileAllocator] done allocating datafile /data/db/sharding_inqueries1/test.0, size: 16MB, took 0.308 secs
m30001| Thu Jun 14 01:38:26 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:38:26 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:26 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:38:26 [conn2] build index test.foo { a: 1.0, b: 1.0 }
m30001| Thu Jun 14 01:38:26 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:26 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:7 W:72 r:261 w:1330746 1330ms
m30001| Thu Jun 14 01:38:26 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd978d1adc8838d53363b34'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:75 reslen:51 1327ms
m30001| Thu Jun 14 01:38:26 [FileAllocator] allocating new datafile /data/db/sharding_inqueries1/test.1, filling with zeroes...
m30001| Thu Jun 14 01:38:26 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:38:26 [initandlisten] connection accepted from 127.0.0.1:56594 #7 (7 connections now open)
m30002| Thu Jun 14 01:38:26 [initandlisten] connection accepted from 127.0.0.1:45591 #3 (3 connections now open)
m30999| Thu Jun 14 01:38:26 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd978d1adc8838d53363b34
m30999| Thu Jun 14 01:38:26 [conn] resetting shard version of test.foo on localhost:30002, version is zero
Unsharded $in query ran in 21
Sharded $in query ran in 44
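The two timings above are printed by inTiming.js in milliseconds. The following is only an illustrative sketch of how such a measurement can be taken in the mongo shell; the collection, field, and values here are hypothetical and not taken from the test.

    // Hypothetical helper: time a $in query end-to-end by draining the cursor.
    function timeInQuery(coll, field, values) {
        var query = {};
        query[field] = { $in: values };
        var start = new Date();
        coll.find(query).itcount();        // force full execution of the query
        return new Date() - start;         // elapsed milliseconds
    }

    // e.g. print("Sharded $in query ran in " + timeInQuery(db.foo, "a", [1, 2, 3]));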
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
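The shardcollection, split, and movechunk operations logged around this point correspond to admin commands of roughly the shape below. The shard key, split points, and chunk destinations are the ones visible in the surrounding mongos/mongod lines; issuing them via raw runCommand (rather than shell helpers) is an assumption for illustration.

    // Sketch of the sharding admin commands reflected in the log, run against the mongos.
    var admin = db.getSiblingDB("admin");
    admin.runCommand({ enableSharding: "test" });
    admin.runCommand({ shardCollection: "test.foo", key: { a: 1, b: 1 } });
    admin.runCommand({ split: "test.foo", middle: { a: 1, b: 10 } });
    admin.runCommand({ split: "test.foo", middle: { a: 3, b: 0 } });
    admin.runCommand({ moveChunk: "test.foo", find: { a: 1, b: 0 },  to: "shard0000" });
    admin.runCommand({ moveChunk: "test.foo", find: { a: 1, b: 15 }, to: "shard0001" });
    admin.runCommand({ moveChunk: "test.foo", find: { a: 3, b: 15 }, to: "shard0002" });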
m30999| Thu Jun 14 01:38:26 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey, b: MinKey } max: { a: MaxKey, b: MaxKey }
m30001| Thu Jun 14 01:38:26 [initandlisten] connection accepted from 127.0.0.1:44499 #4 (4 connections now open)
m30000| Thu Jun 14 01:38:26 [initandlisten] connection accepted from 127.0.0.1:56597 #8 (8 connections now open)
m30001| Thu Jun 14 01:38:26 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0, b: 1.0 }, min: { a: MinKey, b: MinKey }, max: { a: MaxKey, b: MaxKey }, from: "shard0001", splitKeys: [ { a: 1.0, b: 10.0 } ], shardId: "test.foo-a_MinKeyb_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:26 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:26 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652306:1963313074 (sleeping for 30000ms)
m30001| Thu Jun 14 01:38:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' acquired, ts : 4fd978d2eca07100be7f462d
m30001| Thu Jun 14 01:38:26 [conn4] splitChunk accepted at version 1|0||4fd978d1adc8838d53363b36
m30001| Thu Jun 14 01:38:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:26-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652306779), what: "split", ns: "test.foo", details: { before: { min: { a: MinKey, b: MinKey }, max: { a: MaxKey, b: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd978d1adc8838d53363b36') }, right: { min: { a: 1.0, b: 10.0 }, max: { a: MaxKey, b: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd978d1adc8838d53363b36') } } }
m30001| Thu Jun 14 01:38:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' unlocked.
m30999| Thu Jun 14 01:38:26 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd978d1adc8838d53363b36 based on: 1|0||4fd978d1adc8838d53363b36
m30999| Thu Jun 14 01:38:26 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 1.0, b: 10.0 } max: { a: MaxKey, b: MaxKey }
m30001| Thu Jun 14 01:38:26 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0, b: 1.0 }, min: { a: 1.0, b: 10.0 }, max: { a: MaxKey, b: MaxKey }, from: "shard0001", splitKeys: [ { a: 3.0, b: 0.0 } ], shardId: "test.foo-a_1.0b_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:26 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' acquired, ts : 4fd978d2eca07100be7f462e
m30001| Thu Jun 14 01:38:26 [conn4] splitChunk accepted at version 1|2||4fd978d1adc8838d53363b36
m30001| Thu Jun 14 01:38:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:26-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652306784), what: "split", ns: "test.foo", details: { before: { min: { a: 1.0, b: 10.0 }, max: { a: MaxKey, b: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 1.0, b: 10.0 }, max: { a: 3.0, b: 0.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd978d1adc8838d53363b36') }, right: { min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd978d1adc8838d53363b36') } } }
m30001| Thu Jun 14 01:38:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' unlocked.
m30999| Thu Jun 14 01:38:26 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd978d1adc8838d53363b36 based on: 1|2||4fd978d1adc8838d53363b36
m30999| Thu Jun 14 01:38:26 [conn] CMD: movechunk: { moveChunk: "test.foo", find: { a: 1.0, b: 0.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:38:26 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey, b: MinKey } max: { a: 1.0, b: 10.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:38:26 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_MinKeyb_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:26 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' acquired, ts : 4fd978d2eca07100be7f462f
m30001| Thu Jun 14 01:38:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:26-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652306787), what: "moveChunk.start", ns: "test.foo", details: { min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:38:26 [conn4] moveChunk request accepted at version 1|4||4fd978d1adc8838d53363b36
m30001| Thu Jun 14 01:38:26 [conn4] moveChunk number of documents: 0
m30001| Thu Jun 14 01:38:26 [initandlisten] connection accepted from 127.0.0.1:44501 #5 (5 connections now open)
m30000| Thu Jun 14 01:38:26 [FileAllocator] allocating new datafile /data/db/sharding_inqueries0/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:38:27 [FileAllocator] done allocating datafile /data/db/sharding_inqueries1/test.1, size: 32MB, took 0.809 secs
m30001| Thu Jun 14 01:38:27 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, shardKeyPattern: { a: 1, b: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:38:27 [FileAllocator] done allocating datafile /data/db/sharding_inqueries0/test.ns, size: 16MB, took 0.998 secs
m30000| Thu Jun 14 01:38:27 [FileAllocator] allocating new datafile /data/db/sharding_inqueries0/test.0, filling with zeroes...
m30000| Thu Jun 14 01:38:28 [FileAllocator] done allocating datafile /data/db/sharding_inqueries0/test.0, size: 16MB, took 0.333 secs
m30000| Thu Jun 14 01:38:28 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:38:28 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:28 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:38:28 [migrateThread] build index test.foo { a: 1.0, b: 1.0 }
m30000| Thu Jun 14 01:38:28 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:28 [FileAllocator] allocating new datafile /data/db/sharding_inqueries0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:38:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: MinKey, b: MinKey } -> { a: 1.0, b: 10.0 }
m30001| Thu Jun 14 01:38:28 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, shardKeyPattern: { a: 1, b: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:38:28 [conn4] moveChunk setting version to: 2|0||4fd978d1adc8838d53363b36
m30000| Thu Jun 14 01:38:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: MinKey, b: MinKey } -> { a: 1.0, b: 10.0 }
m30000| Thu Jun 14 01:38:28 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:28-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652308806), what: "moveChunk.to", ns: "test.foo", details: { min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, step1 of 5: 1368, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 649 } }
m30000| Thu Jun 14 01:38:28 [initandlisten] connection accepted from 127.0.0.1:56599 #9 (9 connections now open)
m30001| Thu Jun 14 01:38:28 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, shardKeyPattern: { a: 1, b: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:38:28 [conn4] moveChunk updating self version to: 2|1||4fd978d1adc8838d53363b36 through { a: 1.0, b: 10.0 } -> { a: 3.0, b: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:38:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:28-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652308811), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:38:28 [conn4] doing delete inline
m30001| Thu Jun 14 01:38:28 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:38:28 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' unlocked.
m30001| Thu Jun 14 01:38:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:28-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652308811), what: "moveChunk.from", ns: "test.foo", details: { min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:38:28 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: MinKey, b: MinKey }, max: { a: 1.0, b: 10.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_MinKeyb_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:79 w:61 reslen:37 2025ms
m30999| Thu Jun 14 01:38:28 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 2|1||4fd978d1adc8838d53363b36 based on: 1|4||4fd978d1adc8838d53363b36
m30999| Thu Jun 14 01:38:28 [conn] CMD: movechunk: { moveChunk: "test.foo", find: { a: 1.0, b: 15.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:38:28 [conn] CMD: movechunk: { moveChunk: "test.foo", find: { a: 3.0, b: 15.0 }, to: "shard0002" }
m30999| Thu Jun 14 01:38:28 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 3.0, b: 0.0 } max: { a: MaxKey, b: MaxKey }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Thu Jun 14 01:38:28 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_3.0b_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:28 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:28 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' acquired, ts : 4fd978d4eca07100be7f4630
m30001| Thu Jun 14 01:38:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:28-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652308815), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:38:28 [conn4] moveChunk request accepted at version 2|1||4fd978d1adc8838d53363b36
m30001| Thu Jun 14 01:38:28 [conn4] moveChunk number of documents: 0
m30002| Thu Jun 14 01:38:28 [initandlisten] connection accepted from 127.0.0.1:45596 #4 (4 connections now open)
m30001| Thu Jun 14 01:38:28 [initandlisten] connection accepted from 127.0.0.1:44504 #6 (6 connections now open)
m30002| Thu Jun 14 01:38:28 [FileAllocator] allocating new datafile /data/db/sharding_inqueries2/test.ns, filling with zeroes...
m30002| Thu Jun 14 01:38:28 [FileAllocator] creating directory /data/db/sharding_inqueries2/_tmp
m30000| Thu Jun 14 01:38:28 [FileAllocator] done allocating datafile /data/db/sharding_inqueries0/test.1, size: 32MB, took 0.694 secs
m30002| Thu Jun 14 01:38:29 [FileAllocator] done allocating datafile /data/db/sharding_inqueries2/test.ns, size: 16MB, took 0.373 secs
m30002| Thu Jun 14 01:38:29 [FileAllocator] allocating new datafile /data/db/sharding_inqueries2/test.0, filling with zeroes...
m30002| Thu Jun 14 01:38:29 [FileAllocator] done allocating datafile /data/db/sharding_inqueries2/test.0, size: 16MB, took 0.349 secs
m30002| Thu Jun 14 01:38:29 [migrateThread] build index test.foo { _id: 1 }
m30002| Thu Jun 14 01:38:29 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:38:29 [migrateThread] info: creating collection test.foo on add index
m30002| Thu Jun 14 01:38:29 [migrateThread] build index test.foo { a: 1.0, b: 1.0 }
m30002| Thu Jun 14 01:38:29 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Thu Jun 14 01:38:29 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 3.0, b: 0.0 } -> { a: MaxKey, b: MaxKey }
m30002| Thu Jun 14 01:38:29 [FileAllocator] allocating new datafile /data/db/sharding_inqueries2/test.1, filling with zeroes...
m30001| Thu Jun 14 01:38:29 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, shardKeyPattern: { a: 1, b: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:38:29 [conn4] moveChunk setting version to: 3|0||4fd978d1adc8838d53363b36
m30002| Thu Jun 14 01:38:29 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 3.0, b: 0.0 } -> { a: MaxKey, b: MaxKey }
m30002| Thu Jun 14 01:38:29 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:29-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652309822), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, step1 of 5: 767, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 237 } }
m30000| Thu Jun 14 01:38:29 [initandlisten] connection accepted from 127.0.0.1:56602 #10 (10 connections now open)
m30001| Thu Jun 14 01:38:29 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, shardKeyPattern: { a: 1, b: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:38:29 [conn4] moveChunk updating self version to: 3|1||4fd978d1adc8838d53363b36 through { a: 1.0, b: 10.0 } -> { a: 3.0, b: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:38:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:29-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652309827), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Thu Jun 14 01:38:29 [conn4] doing delete inline
m30001| Thu Jun 14 01:38:29 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:38:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652306:1963313074' unlocked.
m30001| Thu Jun 14 01:38:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:29-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44499", time: new Date(1339652309830), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 8, step6 of 6: 0 } }
m30001| Thu Jun 14 01:38:29 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { a: 3.0, b: 0.0 }, max: { a: MaxKey, b: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_3.0b_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:144 w:107 reslen:37 1016ms
m30999| Thu Jun 14 01:38:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 3|1||4fd978d1adc8838d53363b36 based on: 2|1||4fd978d1adc8838d53363b36
m30000| Thu Jun 14 01:38:29 [conn6] no current chunk manager found for this shard, will initialize
m30002| Thu Jun 14 01:38:29 [conn3] no current chunk manager found for this shard, will initialize
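For reference, the two migrations in this run were requested through mongos with the moveChunk admin command in its find/to form (the exact documents appear in the "CMD: movechunk" lines above). A minimal sketch, assuming a mongo shell connected to the mongos on port 30999:

    // request the same two chunk moves by hand
    var admin = db.getSiblingDB("admin");
    admin.runCommand({ moveChunk: "test.foo", find: { a: 1, b: 15 }, to: "shard0001" });  // chunk holding { a: 1, b: 15 }
    admin.runCommand({ moveChunk: "test.foo", find: { a: 3, b: 15 }, to: "shard0002" });  // chunk holding { a: 3, b: 15 }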
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.foo chunks:
              shard0000  1
              shard0001  1
              shard0002  1
          { "a" : { $minKey : 1 }, "b" : { $minKey : 1 } } -->> { "a" : 1, "b" : 10 } on : shard0000 Timestamp(2000, 0)
          { "a" : 1, "b" : 10 } -->> { "a" : 3, "b" : 0 } on : shard0001 Timestamp(3000, 1)
          { "a" : 3, "b" : 0 } -->> { "a" : { $maxKey : 1 }, "b" : { $maxKey : 1 } } on : shard0002 Timestamp(3000, 0)
m30999| Thu Jun 14 01:38:29 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30002| Thu Jun 14 01:38:29 [conn3] end connection 127.0.0.1:45591 (3 connections now open)
m30000| Thu Jun 14 01:38:29 [conn3] end connection 127.0.0.1:56584 (9 connections now open)
m30000| Thu Jun 14 01:38:29 [conn5] end connection 127.0.0.1:56588 (8 connections now open)
m30000| Thu Jun 14 01:38:29 [conn6] end connection 127.0.0.1:56592 (7 connections now open)
m30001| Thu Jun 14 01:38:29 [conn3] end connection 127.0.0.1:44496 (5 connections now open)
m30001| Thu Jun 14 01:38:29 [conn4] end connection 127.0.0.1:44499 (4 connections now open)
m30002| Thu Jun 14 01:38:30 [FileAllocator] done allocating datafile /data/db/sharding_inqueries2/test.1, size: 32MB, took 0.705 secs
Thu Jun 14 01:38:30 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:38:30 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:38:30 [interruptThread] now exiting
m30000| Thu Jun 14 01:38:30 dbexit:
m30000| Thu Jun 14 01:38:30 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:38:30 [interruptThread] closing listening socket: 31
m30000| Thu Jun 14 01:38:30 [interruptThread] closing listening socket: 32
m30000| Thu Jun 14 01:38:30 [interruptThread] closing listening socket: 33
m30000| Thu Jun 14 01:38:30 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:38:30 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:38:30 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:38:30 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:38:30 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:38:30 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:38:30 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:38:30 dbexit: really exiting now
m30001| Thu Jun 14 01:38:30 [conn5] end connection 127.0.0.1:44501 (3 connections now open)
Thu Jun 14 01:38:31 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:38:31 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:38:31 [interruptThread] now exiting
m30001| Thu Jun 14 01:38:31 dbexit:
m30001| Thu Jun 14 01:38:31 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:38:31 [interruptThread] closing listening socket: 34
m30001| Thu Jun 14 01:38:31 [interruptThread] closing listening socket: 35
m30001| Thu Jun 14 01:38:31 [interruptThread] closing listening socket: 36
m30001| Thu Jun 14 01:38:31 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:38:31 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:38:31 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:38:31 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:38:31 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:38:31 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:38:31 [conn4] end connection 127.0.0.1:45596 (2 connections now open)
m30001| Thu Jun 14 01:38:31 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:38:31 dbexit: really exiting now
Thu Jun 14 01:38:32 shell: stopped mongo program on port 30001
m30002| Thu Jun 14 01:38:32 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Thu Jun 14 01:38:32 [interruptThread] now exiting
m30002| Thu Jun 14 01:38:32 dbexit:
m30002| Thu Jun 14 01:38:32 [interruptThread] shutdown: going to close listening sockets...
m30002| Thu Jun 14 01:38:32 [interruptThread] closing listening socket: 37
m30002| Thu Jun 14 01:38:32 [interruptThread] closing listening socket: 38
m30002| Thu Jun 14 01:38:32 [interruptThread] closing listening socket: 39
m30002| Thu Jun 14 01:38:32 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Thu Jun 14 01:38:32 [interruptThread] shutdown: going to flush diaglog...
m30002| Thu Jun 14 01:38:32 [interruptThread] shutdown: going to close sockets...
m30002| Thu Jun 14 01:38:32 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Thu Jun 14 01:38:32 [interruptThread] shutdown: closing all files...
m30002| Thu Jun 14 01:38:32 [interruptThread] closeAllFiles() finished
m30002| Thu Jun 14 01:38:32 [interruptThread] shutdown: removing fs lock...
m30002| Thu Jun 14 01:38:32 dbexit: really exiting now
Thu Jun 14 01:38:33 shell: stopped mongo program on port 30002
*** ShardingTest sharding_inqueries completed successfully in 9.926 seconds ***
10003.057957ms
Thu Jun 14 01:38:33 [initandlisten] connection accepted from 127.0.0.1:35071 #36 (23 connections now open)
*******************************************
Test : index1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/index1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/index1.js";TestData.testFile = "index1.js";TestData.testName = "index1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:38:33 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/shard_index0'
Thu Jun 14 01:38:34 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/shard_index0
m30000| Thu Jun 14 01:38:34
m30000| Thu Jun 14 01:38:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:38:34
m30000| Thu Jun 14 01:38:34 [initandlisten] MongoDB starting : pid=26031 port=30000 dbpath=/data/db/shard_index0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:38:34 [initandlisten]
m30000| Thu Jun 14 01:38:34 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:38:34 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:38:34 [initandlisten]
m30000| Thu Jun 14 01:38:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:38:34 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:38:34 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:38:34 [initandlisten]
m30000| Thu Jun 14 01:38:34 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:38:34 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:38:34 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:38:34 [initandlisten] options: { dbpath: "/data/db/shard_index0", port: 30000 }
m30000| Thu Jun 14 01:38:34 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:38:34 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/shard_index1'
m30000| Thu Jun 14 01:38:34 [initandlisten] connection accepted from 127.0.0.1:56605 #1 (1 connection now open)
Thu Jun 14 01:38:34 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/shard_index1
m30001| Thu Jun 14 01:38:34
m30001| Thu Jun 14 01:38:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:38:34
m30001| Thu Jun 14 01:38:34 [initandlisten] MongoDB starting : pid=26044 port=30001 dbpath=/data/db/shard_index1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:38:34 [initandlisten]
m30001| Thu Jun 14 01:38:34 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:38:34 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:38:34 [initandlisten]
m30001| Thu Jun 14 01:38:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:38:34 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:38:34 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:38:34 [initandlisten]
m30001| Thu Jun 14 01:38:34 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:38:34 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:38:34 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:38:34 [initandlisten] options: { dbpath: "/data/db/shard_index1", port: 30001 }
m30001| Thu Jun 14 01:38:34 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:38:34 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:38:34 [initandlisten] connection accepted from 127.0.0.1:44510 #1 (1 connection now open)
ShardingTest shard_index :
{
  "config" : "localhost:30000",
  "shards" : [
      connection to localhost:30000,
      connection to localhost:30001
  ]
}
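This summary is printed by the ShardingTest helper that jstests/sharding/index1.js uses to bring up the cluster. Roughly, the setup phase looks like the sketch below; the constructor arguments and option spellings are an assumption for illustration, not copied from the test file:

    // sketch only: a two-shard, one-mongos test cluster as used by this suite
    var s = new ShardingTest( "shard_index", 2, 0, 1 );   // name, shards, verbosity, mongos count (assumed signature)
    s.adminCommand({ enablesharding: "test" });           // allow collections in 'test' to be sharded
    // ... the per-collection shardcollection cases below run here ...
    s.stop();                                             // tears down the mongos and both mongods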
Thu Jun 14 01:38:34 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30999| Thu Jun 14 01:38:34 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:38:34 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26058 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30000| Thu Jun 14 01:38:34 [initandlisten] connection accepted from 127.0.0.1:56608 #2 (2 connections now open)
m30000| Thu Jun 14 01:38:34 [FileAllocator] allocating new datafile /data/db/shard_index0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:38:34 [FileAllocator] creating directory /data/db/shard_index0/_tmp
m30999| Thu Jun 14 01:38:34 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:38:34 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:38:34 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:38:34 [initandlisten] connection accepted from 127.0.0.1:56610 #3 (3 connections now open)
m30000| Thu Jun 14 01:38:34 [FileAllocator] done allocating datafile /data/db/shard_index0/config.ns, size: 16MB, took 0.289 secs
m30000| Thu Jun 14 01:38:34 [FileAllocator] allocating new datafile /data/db/shard_index0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:38:35 [FileAllocator] done allocating datafile /data/db/shard_index0/config.0, size: 16MB, took 0.285 secs
m30000| Thu Jun 14 01:38:35 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn2] insert config.settings keyUpdates:0 locks(micros) w:585821 585ms
m30000| Thu Jun 14 01:38:35 [FileAllocator] allocating new datafile /data/db/shard_index0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:56613 #4 (4 connections now open)
m30000| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:56614 #5 (5 connections now open)
m30000| Thu Jun 14 01:38:35 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:38:35 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:38:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:38:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:38:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:38:35 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:38:35 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:35 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:38:35 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:38:35 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:38:35 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:38:35 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:38:35
m30999| Thu Jun 14 01:38:35 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:38:35 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:56615 #6 (6 connections now open)
m30999| Thu Jun 14 01:38:35 [mongosMain] connection accepted from 127.0.0.1:53497 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:38:35 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:56617 #7 (7 connections now open)
m30000| Thu Jun 14 01:38:35 [conn5] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:35 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:38:35 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:38:35 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:38:35 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:38:35 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:38:35 [conn] DROP: test.foo0
m30999| Thu Jun 14 01:38:35 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd978db0828a565459fac07
Test # 0
m30000| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:56620 #8 (8 connections now open)
m30999| Thu Jun 14 01:38:35 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd978db0828a565459fac07
m30999| Thu Jun 14 01:38:35 [conn] enabling sharding on: test
m30001| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:44521 #2 (2 connections now open)
m30001| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:44522 #3 (3 connections now open)
m30001| Thu Jun 14 01:38:35 [conn3] CMD: drop test.foo0
m30001| Thu Jun 14 01:38:35 [FileAllocator] allocating new datafile /data/db/shard_index1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:38:35 [FileAllocator] creating directory /data/db/shard_index1/_tmp
m30001| Thu Jun 14 01:38:35 [initandlisten] connection accepted from 127.0.0.1:44524 #4 (4 connections now open)
m30000| Thu Jun 14 01:38:35 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn5] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:38:35 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:35 [conn5] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:38:35 [conn5] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:38:35 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652315:1804289383' acquired, ts : 4fd978db0828a565459fac08
m30999| Thu Jun 14 01:38:35 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652315:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:38:35 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652315:1804289383' unlocked.
m30000| Thu Jun 14 01:38:35 [FileAllocator] done allocating datafile /data/db/shard_index0/config.1, size: 32MB, took 0.628 secs
m30001| Thu Jun 14 01:38:36 [FileAllocator] done allocating datafile /data/db/shard_index1/test.ns, size: 16MB, took 0.377 secs
m30001| Thu Jun 14 01:38:36 [FileAllocator] allocating new datafile /data/db/shard_index1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:38:36 [FileAllocator] done allocating datafile /data/db/shard_index1/test.0, size: 16MB, took 0.306 secs
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo0 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:36 [conn3] insert test.foo0 keyUpdates:0 locks(micros) W:69 w:1270137 1269ms
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo0 { num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo0 { x: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30001| Thu Jun 14 01:38:36 [FileAllocator] allocating new datafile /data/db/shard_index1/test.1, filling with zeroes...
command { "shardcollection" : "test.foo0", "key" : { "x" : 1 } } failed: {
"ok" : 0,
"errmsg" : "can't shard collection 'test.foo0' with unique index on: { v: 1, key: { num: 1.0 }, unique: true, ns: \"test.foo0\", name: \"num_1\" }, uniqueness can't be maintained across unless shard key index is a prefix"
}
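This failure is the behaviour the test asserts: test.foo0 carries a unique index on { num: 1 }, so it cannot be sharded on { x: 1 }, because uniqueness of num could not be enforced across shards unless the shard key is a prefix of that index. A minimal shell reproduction (a sketch reusing the same names, not the test's own code), assuming a shell connected to the mongos:

    var testDB = db.getSiblingDB("test");
    testDB.foo0.ensureIndex({ num: 1 }, { unique: true });   // unique index not prefixed by the intended shard key
    sh.enableSharding("test");
    sh.shardCollection("test.foo0", { x: 1 });               // rejected with the errmsg shown above
    sh.shardCollection("test.foo0", { num: 1 }, true);       // sharding on the unique index itself is accepted (compare test.foo3 below)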
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo1
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo1
Test # 1
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo1 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
Test # 2
Test # 3
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo1 { x: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo1 { x: 1.0, num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo2
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo2 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo2 { x: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo2 { x: 1.0, num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.003 secs
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo3
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo3 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo3 { num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo1", key: { x: 1.0 } }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo1 with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo1 using new epoch 4fd978dc0828a565459fac09
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo1: 0ms sequenceNumber: 2 version: 1|0||4fd978dc0828a565459fac09 based on: (empty)
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo2
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo2", key: { x: 1.0 } }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo2 with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo2 using new epoch 4fd978dc0828a565459fac0a
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo2: 0ms sequenceNumber: 3 version: 1|0||4fd978dc0828a565459fac0a based on: (empty)
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo3
m30000| Thu Jun 14 01:38:36 [conn5] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:38:36 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:36 [initandlisten] connection accepted from 127.0.0.1:56622 #9 (9 connections now open)
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo3 { num: 1.0, x: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo3", key: { num: 1.0 }, unique: true }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo3 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo3 using new epoch 4fd978dc0828a565459fac0b
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo3: 0ms sequenceNumber: 4 version: 1|0||4fd978dc0828a565459fac0b based on: (empty)
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo4
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo4
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo4 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
Test # 4
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo4 { _id: 1.0, num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo4", key: { _id: 1.0 }, unique: true }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo4 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo4 using new epoch 4fd978dc0828a565459fac0c
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo4: 0ms sequenceNumber: 5 version: 1|0||4fd978dc0828a565459fac0c based on: (empty)
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo5
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo5
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo5 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
Test # 5
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo5 { _id: 1.0, num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo5", key: { _id: 1.0, num: 1.0 }, unique: true }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo5 with shard key: { _id: 1.0, num: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo5 using new epoch 4fd978dc0828a565459fac0d
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo5: 0ms sequenceNumber: 6 version: 1|0||4fd978dc0828a565459fac0d based on: (empty)
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo6
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo6
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo6 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
Test # 6
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo6 { num: 1.0, _id: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo6", key: { num: 1.0 }, unique: true }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo6 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo6 using new epoch 4fd978dc0828a565459fac0e
m30001| Thu Jun 14 01:38:36 [conn4] build index test.foo6 { num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo6: 0ms sequenceNumber: 7 version: 1|0||4fd978dc0828a565459fac0e based on: (empty)
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "test.foo6",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "num" : 1,
            "_id" : 1
        },
        "unique" : true,
        "ns" : "test.foo6",
        "name" : "num_1__id_1"
    },
    {
        "v" : 1,
        "key" : {
            "num" : 1
        },
        "unique" : true,
        "ns" : "test.foo6",
        "name" : "num_1"
    }
]
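The array above is getIndexes() output for test.foo6: sharding it on { num: 1 } with unique: true added the unique { num: 1 } index alongside the pre-existing { num: 1, _id: 1 } one. From the shell:

    db.getSiblingDB("test").foo6.getIndexes();   // returns the three index documents listed above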
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo7
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo7
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo7 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
Test # 7
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo7", key: { num: 1.0 } }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo7 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo7 using new epoch 4fd978dc0828a565459fac0f
m30001| Thu Jun 14 01:38:36 [conn4] build index test.foo7 { num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo7: 0ms sequenceNumber: 8 version: 1|0||4fd978dc0828a565459fac0f based on: (empty)
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo8
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo8
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo8 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
Test # 8
m30999| Thu Jun 14 01:38:36 [conn] CMD: shardcollection: { shardcollection: "test.foo8", key: { num: 1.0 }, unique: true }
m30999| Thu Jun 14 01:38:36 [conn] enable sharding on: test.foo8 with shard key: { num: 1.0 }
m30999| Thu Jun 14 01:38:36 [conn] going to create 1 chunk(s) for: test.foo8 using new epoch 4fd978dc0828a565459fac10
m30001| Thu Jun 14 01:38:36 [conn4] build index test.foo8 { num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:36 [conn] ChunkManager: time to load chunks for test.foo8: 0ms sequenceNumber: 9 version: 1|0||4fd978dc0828a565459fac10 based on: (empty)
m30001| Thu Jun 14 01:38:36 [conn3] no current chunk manager found for this shard, will initialize
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "test.foo8",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "num" : 1
        },
        "unique" : true,
        "ns" : "test.foo8",
        "name" : "num_1"
    }
]
m30999| Thu Jun 14 01:38:36 [conn] DROP: test.foo9
m30001| Thu Jun 14 01:38:36 [conn3] CMD: drop test.foo9
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo9 { _id: 1 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 0 total records. 0 secs
Test # 9
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo9 { num: 1.0 }
m30001| Thu Jun 14 01:38:36 [conn3] build index done. scanned 300 total records. 0.001 secs
m30001| Thu Jun 14 01:38:36 [conn3] build index test.foo9 { x: 1.0 }
command { "shardcollection" : "test.foo9", "key" : { "x" : 1 } } failed: {
"ok" : 0,
"errmsg" : "can't shard collection 'test.foo9' with unique index on: { v: 1, key: { num: 1.0 }, unique: true, ns: \"test.foo9\", name: \"num_1\" }, uniqueness can't be maintained across unless shard key index is a prefix"
}
m30999| Thu Jun 14 01:38:36 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:38:36 [conn3] end connection 127.0.0.1:56610 (8 connections now open)
m30000| Thu Jun 14 01:38:36 [conn5] end connection 127.0.0.1:56614 (7 connections now open)
m30000| Thu Jun 14 01:38:36 [conn4] end connection 127.0.0.1:56613 (6 connections now open)
m30000| Thu Jun 14 01:38:36 [conn6] end connection 127.0.0.1:56615 (5 connections now open)
m30001| Thu Jun 14 01:38:36 [conn3] end connection 127.0.0.1:44522 (3 connections now open)
m30000| Thu Jun 14 01:38:36 [conn8] end connection 127.0.0.1:56620 (4 connections now open)
m30001| Thu Jun 14 01:38:36 [conn4] end connection 127.0.0.1:44524 (2 connections now open)
m30001| Thu Jun 14 01:38:37 [FileAllocator] done allocating datafile /data/db/shard_index1/test.1, size: 32MB, took 0.707 secs
Thu Jun 14 01:38:37 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:38:37 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:38:37 [interruptThread] now exiting
m30000| Thu Jun 14 01:38:37 dbexit:
m30000| Thu Jun 14 01:38:37 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:38:37 [interruptThread] closing listening socket: 32
m30000| Thu Jun 14 01:38:37 [interruptThread] closing listening socket: 33
m30000| Thu Jun 14 01:38:37 [interruptThread] closing listening socket: 34
m30000| Thu Jun 14 01:38:37 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:38:37 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:38:37 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:38:37 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:38:37 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:38:37 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:38:37 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:38:37 dbexit: really exiting now
Thu Jun 14 01:38:38 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:38:38 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:38:38 [interruptThread] now exiting
m30001| Thu Jun 14 01:38:38 dbexit:
m30001| Thu Jun 14 01:38:38 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:38:38 [interruptThread] closing listening socket: 35
m30001| Thu Jun 14 01:38:38 [interruptThread] closing listening socket: 36
m30001| Thu Jun 14 01:38:38 [interruptThread] closing listening socket: 37
m30001| Thu Jun 14 01:38:38 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:38:38 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:38:38 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:38:38 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:38:38 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:38:38 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:38:38 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:38:38 dbexit: really exiting now
Thu Jun 14 01:38:39 shell: stopped mongo program on port 30001
*** ShardingTest shard_index completed successfully in 5.897 seconds ***
5945.659161ms
Thu Jun 14 01:38:39 [initandlisten] connection accepted from 127.0.0.1:35091 #37 (24 connections now open)
*******************************************
Test : inserts_consistent.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/inserts_consistent.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/inserts_consistent.js";TestData.testFile = "inserts_consistent.js";TestData.testName = "inserts_consistent";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:38:39 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:38:39 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:38:40
m30000| Thu Jun 14 01:38:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:38:40
m30000| Thu Jun 14 01:38:40 [initandlisten] MongoDB starting : pid=26098 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:38:40 [initandlisten]
m30000| Thu Jun 14 01:38:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:38:40 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:38:40 [initandlisten]
m30000| Thu Jun 14 01:38:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:38:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:38:40 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:38:40 [initandlisten]
m30000| Thu Jun 14 01:38:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:38:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:38:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:38:40 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:38:40 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:38:40 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:38:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:38:40 [initandlisten] connection accepted from 127.0.0.1:56625 #1 (1 connection now open)
m30001| Thu Jun 14 01:38:40
m30001| Thu Jun 14 01:38:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:38:40
m30001| Thu Jun 14 01:38:40 [initandlisten] MongoDB starting : pid=26111 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:38:40 [initandlisten]
m30001| Thu Jun 14 01:38:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:38:40 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:38:40 [initandlisten]
m30001| Thu Jun 14 01:38:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:38:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:38:40 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:38:40 [initandlisten]
m30001| Thu Jun 14 01:38:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:38:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:38:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:38:40 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:38:40 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:38:40 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:38:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m30001| Thu Jun 14 01:38:40 [initandlisten] connection accepted from 127.0.0.1:44530 #1 (1 connection now open)
m29000| Thu Jun 14 01:38:40
m29000| Thu Jun 14 01:38:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:38:40
m29000| Thu Jun 14 01:38:40 [initandlisten] MongoDB starting : pid=26123 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:38:40 [initandlisten]
m29000| Thu Jun 14 01:38:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:38:40 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:38:40 [initandlisten]
m29000| Thu Jun 14 01:38:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:38:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:38:40 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:38:40 [initandlisten]
m29000| Thu Jun 14 01:38:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:38:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:38:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:38:40 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:38:40 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:38:40 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:38:40 [websvr] ERROR: addr already in use
m29000| Thu Jun 14 01:38:40 [initandlisten] connection accepted from 127.0.0.1:54863 #1 (1 connection now open)
"localhost:29000"
m29000| Thu Jun 14 01:38:40 [initandlisten] connection accepted from 127.0.0.1:54864 #2 (2 connections now open)
ShardingTest test :
{
  "config" : "localhost:29000",
  "shards" : [
      connection to localhost:30000,
      connection to localhost:30001
  ]
}
m29000| Thu Jun 14 01:38:40 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:38:40 [FileAllocator] creating directory /data/db/test-config0/_tmp
Thu Jun 14 01:38:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:29000 -vv
m30999| Thu Jun 14 01:38:40 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:38:40 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26139 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:38:40 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:38:40 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:38:40 [mongosMain] options: { configdb: "localhost:29000", port: 30999, vv: true }
m30999| Thu Jun 14 01:38:40 [mongosMain] config string : localhost:29000
m30999| Thu Jun 14 01:38:40 [mongosMain] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:38:40 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:38:40 [initandlisten] connection accepted from 127.0.0.1:54866 #3 (3 connections now open)
m30999| Thu Jun 14 01:38:40 [mongosMain] connected connection!
m29000| Thu Jun 14 01:38:40 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.306 secs
m29000| Thu Jun 14 01:38:40 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:38:41 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.298 secs
m29000| Thu Jun 14 01:38:41 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:41 [conn2] insert config.settings keyUpdates:0 locks(micros) w:625968 625ms
m29000| Thu Jun 14 01:38:41 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m30999| Thu Jun 14 01:38:41 [mongosMain] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:54870 #4 (4 connections now open)
m30999| Thu Jun 14 01:38:41 [mongosMain] connected connection!
m29000| Thu Jun 14 01:38:41 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:41 [mongosMain] MaxChunkSize: 50
m29000| Thu Jun 14 01:38:41 [conn3] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:41 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:38:41 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:41 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:41 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:41 [conn3] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:41 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:38:41 [conn3] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:41 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:38:41 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:38:41 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:38:41 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:38:41 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:38:41
m30999| Thu Jun 14 01:38:41 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:38:41 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:41 [Balancer] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:54871 #5 (5 connections now open)
m30999| Thu Jun 14 01:38:41 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:38:41 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:38:41 [Balancer] connected connection!
m30999| Thu Jun 14 01:38:41 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:38:41 [Balancer] skew from remote server localhost:29000 found: 0
m30999| Thu Jun 14 01:38:41 [Balancer] skew from remote server localhost:29000 found: -1
m30999| Thu Jun 14 01:38:41 [Balancer] skew from remote server localhost:29000 found: 0
m30999| Thu Jun 14 01:38:41 [Balancer] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
m30999| Thu Jun 14 01:38:41 [Balancer] inserting initial doc in config.locks for lock balancer
m29000| Thu Jun 14 01:38:41 [conn5] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:41 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652321:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652321:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652321:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:38:41 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd978e1a4ce1fb93eb00539" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:38:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652321:1804289383' acquired, ts : 4fd978e1a4ce1fb93eb00539
m30999| Thu Jun 14 01:38:41 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:38:41 [Balancer] no collections to balance
m30999| Thu Jun 14 01:38:41 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:38:41 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:38:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652321:1804289383' unlocked.
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:38:41 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30999:1339652321:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:38:41 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:38:41 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:38:41 [LockPinger] cluster localhost:29000 pinged successfully at Thu Jun 14 01:38:41 2012 by distributed lock pinger 'localhost:29000/domU-12-31-39-01-70-B4:30999:1339652321:1804289383', sleeping for 30000ms
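The -vv balancer output above shows the distributed lock document each mongos writes into the config database before a balancing round, plus the lock-pinger heartbeat. The same state can be inspected directly from a shell connected to the mongos (or to the config server on port 29000 in this run):

    var config = db.getSiblingDB("config");
    config.locks.find({ _id: "balancer" }).pretty();   // state, who, process, why, ts as logged above
    config.lockpings.find().pretty();                  // heartbeat documents written by the LockPinger threads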
Thu Jun 14 01:38:41 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:29000 -vv
m30999| Thu Jun 14 01:38:41 [mongosMain] connection accepted from 127.0.0.1:53519 #1 (1 connection now open)
m30998| Thu Jun 14 01:38:41 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:38:41 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26158 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:38:41 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:38:41 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:38:41 [mongosMain] options: { configdb: "localhost:29000", port: 30998, vv: true }
m30998| Thu Jun 14 01:38:41 [mongosMain] config string : localhost:29000
m30998| Thu Jun 14 01:38:41 [mongosMain] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:54874 #6 (6 connections now open)
m30998| Thu Jun 14 01:38:41 [mongosMain] connected connection!
m30998| Thu Jun 14 01:38:41 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:38:41 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:38:41 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:38:41 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:38:41 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:38:41 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:38:41 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:38:41 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:38:41 [Balancer] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:38:41 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:38:41 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:54875 #7 (7 connections now open)
m30998| Thu Jun 14 01:38:41 [Balancer] connected connection!
m30998| Thu Jun 14 01:38:41 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:38:41 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:38:41
m30998| Thu Jun 14 01:38:41 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:38:41 [Balancer] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:54876 #8 (8 connections now open)
m30998| Thu Jun 14 01:38:41 [Balancer] connected connection!
m30998| Thu Jun 14 01:38:41 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:38:41 [Balancer] skew from remote server localhost:29000 found: 0
m30998| Thu Jun 14 01:38:41 [Balancer] skew from remote server localhost:29000 found: -1
m30998| Thu Jun 14 01:38:41 [Balancer] skew from remote server localhost:29000 found: 0
m30998| Thu Jun 14 01:38:41 [Balancer] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
m30998| Thu Jun 14 01:38:41 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652321:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339652321:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339652321:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:38:41 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd978e14f0e417772ff3687" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd978e1a4ce1fb93eb00539" } }
m30998| Thu Jun 14 01:38:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652321:1804289383' acquired, ts : 4fd978e14f0e417772ff3687
m30998| Thu Jun 14 01:38:41 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:38:41 [Balancer] no collections to balance
m30998| Thu Jun 14 01:38:41 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:38:41 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:38:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652321:1804289383' unlocked.
m30998| Thu Jun 14 01:38:41 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30998:1339652321:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:38:41 [LockPinger] cluster localhost:29000 pinged successfully at Thu Jun 14 01:38:41 2012 by distributed lock pinger 'localhost:29000/domU-12-31-39-01-70-B4:30998:1339652321:1804289383', sleeping for 30000ms
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:38:41 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:38:41 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:41 [conn] put [admin] on: config:localhost:29000
m30999| Thu Jun 14 01:38:41 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:41 [conn] connected connection!
m30998| Thu Jun 14 01:38:41 [mongosMain] connection accepted from 127.0.0.1:45123 #1 (1 connection now open)
m30000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:56644 #2 (2 connections now open)
m30999| Thu Jun 14 01:38:41 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:38:41 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:44548 #2 (2 connections now open)
m30999| Thu Jun 14 01:38:41 [conn] connected connection!
m30999| Thu Jun 14 01:38:41 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
----
Doing test setup...
----
m30999| Thu Jun 14 01:38:41 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:56646 #3 (3 connections now open)
m30999| Thu Jun 14 01:38:41 [conn] connected connection!
m30999| Thu Jun 14 01:38:41 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd978e1a4ce1fb93eb00538
m30999| Thu Jun 14 01:38:41 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:38:41 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), authoritative: true }
m30999| Thu Jun 14 01:38:41 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:44550 #3 (3 connections now open)
m30999| Thu Jun 14 01:38:41 [conn] connected connection!
m30999| Thu Jun 14 01:38:41 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd978e1a4ce1fb93eb00538
m30999| Thu Jun 14 01:38:41 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:38:41 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), authoritative: true }
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:38:41 [conn] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:41 [conn] connected connection!
m30999| Thu Jun 14 01:38:41 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd978e1a4ce1fb93eb00538
m30999| Thu Jun 14 01:38:41 [conn] initializing shard connection to localhost:29000
m30999| Thu Jun 14 01:38:41 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), authoritative: true }
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: WriteBackListener-localhost:29000
m30999| Thu Jun 14 01:38:41 [WriteBackListener-localhost:29000] localhost:29000 is not a shard node
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.mongos", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m29000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:54882 #9 (9 connections now open)
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "domU-12-31-39-01-70-B4:30999", ping: new Date(1339652321236), up: 0, waiting: true }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
Waiting for active hosts...
Waiting for the balancer lock...
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.locks", n2skip: 0, n2return: -1, options: 0, query: { _id: "balancer" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "balancer", process: "domU-12-31-39-01-70-B4:30998:1339652321:1804289383", state: 0, ts: ObjectId('4fd978e14f0e417772ff3687'), when: new Date(1339652321419), who: "domU-12-31-39-01-70-B4:30998:1339652321:1804289383:Balancer:846930886", why: "doing balance round" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
Waiting again for active hosts after balancer is off...
m30999| Thu Jun 14 01:38:41 [conn] couldn't find database [inserts_consistent] in config db
m30999| Thu Jun 14 01:38:41 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:56649 #4 (4 connections now open)
m30999| Thu Jun 14 01:38:41 [conn] connected connection!
m30999| Thu Jun 14 01:38:41 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:38:41 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:38:41 [initandlisten] connection accepted from 127.0.0.1:44553 #4 (4 connections now open)
m30999| Thu Jun 14 01:38:41 [conn] connected connection!
m30999| Thu Jun 14 01:38:41 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:38:41 [conn] put [inserts_consistent] on: shard0000:localhost:30000
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] creating pcursor over QSpec { ns: "inserts_consistent.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "coll", query: {} }, fields: {} } and CInfo { v_ns: "inserts_consistent.coll", filter: {} }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ shard0000:localhost:30000]
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "shard0000:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "shard0000:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "shard0000:localhost:30000", cursor: { missing: true, n: 0.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: -1, options: 0, query: { _id: "inserts_consistent" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:41 [conn] enabling sharding on: inserts_consistent
m30999| Thu Jun 14 01:38:41 [conn] CMD: shardcollection: { shardcollection: "inserts_consistent.coll", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:38:41 [conn] enable sharding on: inserts_consistent.coll with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:38:41 [conn] going to create 1 chunk(s) for: inserts_consistent.coll using new epoch 4fd978e1a4ce1fb93eb0053a
m30000| Thu Jun 14 01:38:41 [FileAllocator] allocating new datafile /data/db/test0/inserts_consistent.ns, filling with zeroes...
m30000| Thu Jun 14 01:38:41 [FileAllocator] creating directory /data/db/test0/_tmp
m30999| Thu Jun 14 01:38:41 [conn] loaded 1 chunks into new chunk manager for inserts_consistent.coll with version 1|0||4fd978e1a4ce1fb93eb0053a
m30999| Thu Jun 14 01:38:41 [conn] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 2 version: 1|0||4fd978e1a4ce1fb93eb0053a based on: (empty)
m29000| Thu Jun 14 01:38:41 [conn3] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:38:41 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:38:41 [conn] have to set shard version for conn: localhost:30000 ns:inserts_consistent.coll my last seq: 0 current: 2 version: 1|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f6bdf8
m30999| Thu Jun 14 01:38:41 [conn] setShardVersion shard0000 localhost:30000 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), shard: "shard0000", shardHost: "localhost:30000" } 0x8f66148
m29000| Thu Jun 14 01:38:41 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.576 secs
m30000| Thu Jun 14 01:38:42 [FileAllocator] done allocating datafile /data/db/test0/inserts_consistent.ns, size: 16MB, took 0.289 secs
m30000| Thu Jun 14 01:38:42 [FileAllocator] allocating new datafile /data/db/test0/inserts_consistent.0, filling with zeroes...
m30000| Thu Jun 14 01:38:42 [FileAllocator] done allocating datafile /data/db/test0/inserts_consistent.0, size: 16MB, took 0.333 secs
m30000| Thu Jun 14 01:38:42 [conn4] build index inserts_consistent.coll { _id: 1 }
m30000| Thu Jun 14 01:38:42 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:42 [conn4] info: creating collection inserts_consistent.coll on add index
m30000| Thu Jun 14 01:38:42 [conn4] insert inserts_consistent.system.indexes keyUpdates:0 locks(micros) r:248 w:793898 793ms
m30000| Thu Jun 14 01:38:42 [conn3] command admin.$cmd command: { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:158 r:23 reslen:203 792ms
m30999| Thu Jun 14 01:38:42 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "inserts_consistent.coll", need_authoritative: true, errmsg: "first time for collection 'inserts_consistent.coll'", ok: 0.0 }
m30000| Thu Jun 14 01:38:42 [FileAllocator] allocating new datafile /data/db/test0/inserts_consistent.1, filling with zeroes...
m30999| Thu Jun 14 01:38:42 [conn] have to set shard version for conn: localhost:30000 ns:inserts_consistent.coll my last seq: 0 current: 2 version: 1|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f6bdf8
m30999| Thu Jun 14 01:38:42 [conn] setShardVersion shard0000 localhost:30000 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8f66148
m30000| Thu Jun 14 01:38:42 [conn3] no current chunk manager found for this shard, will initialize
m29000| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:54885 #10 (10 connections now open)
m30999| Thu Jun 14 01:38:42 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:38:42 [conn] splitting: inserts_consistent.coll shard: ns:inserts_consistent.coll at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m29000| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:54886 #11 (11 connections now open)
m30000| Thu Jun 14 01:38:42 [conn4] received splitChunk request: { splitChunk: "inserts_consistent.coll", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "inserts_consistent.coll-_id_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:38:42 [conn4] created new distributed lock for inserts_consistent.coll on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:38:42 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30000:1339652322:1548992921 (sleeping for 30000ms)
m30000| Thu Jun 14 01:38:42 [conn4] distributed lock 'inserts_consistent.coll/domU-12-31-39-01-70-B4:30000:1339652322:1548992921' acquired, ts : 4fd978e26338e542b9560d06
m30000| Thu Jun 14 01:38:42 [conn4] splitChunk accepted at version 1|0||4fd978e1a4ce1fb93eb0053a
m30000| Thu Jun 14 01:38:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:42-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56649", time: new Date(1339652322430), what: "split", ns: "inserts_consistent.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a') } } }
m30000| Thu Jun 14 01:38:42 [conn4] distributed lock 'inserts_consistent.coll/domU-12-31-39-01-70-B4:30000:1339652322:1548992921' unlocked.
m30999| Thu Jun 14 01:38:42 [conn] loading chunk manager for collection inserts_consistent.coll using old chunk manager w/ version 1|0||4fd978e1a4ce1fb93eb0053a and 1 chunks
m30999| Thu Jun 14 01:38:42 [conn] loaded 2 chunks into new chunk manager for inserts_consistent.coll with version 1|2||4fd978e1a4ce1fb93eb0053a
m30999| Thu Jun 14 01:38:42 [conn] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 3 version: 1|2||4fd978e1a4ce1fb93eb0053a based on: 1|0||4fd978e1a4ce1fb93eb0053a
----
Refreshing second mongos...
----
m30998| Thu Jun 14 01:38:42 [conn] DBConfig unserialize: inserts_consistent { _id: "inserts_consistent", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:38:42 [conn] loaded 2 chunks into new chunk manager for inserts_consistent.coll with version 1|2||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:42 [conn] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 2 version: 1|2||4fd978e1a4ce1fb93eb0053a based on: (empty)
m30998| Thu Jun 14 01:38:42 [conn] found 0 dropped collections and 1 sharded collections for database inserts_consistent
m30998| Thu Jun 14 01:38:42 [conn] [pcursor] creating pcursor over QSpec { ns: "inserts_consistent.coll", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30998| Thu Jun 14 01:38:42 [conn] [pcursor] initializing over 1 shards required by [inserts_consistent.coll @ 1|2||4fd978e1a4ce1fb93eb0053a]
m30998| Thu Jun 14 01:38:42 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30998| Thu Jun 14 01:38:42 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:38:42 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:38:42 [conn] connected connection!
m30998| Thu Jun 14 01:38:42 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd978e14f0e417772ff3686
m30000| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:56653 #5 (5 connections now open)
m30998| Thu Jun 14 01:38:42 [conn] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:38:42 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd978e14f0e417772ff3686'), authoritative: true }
m30998| Thu Jun 14 01:38:42 BackgroundJob starting: WriteBackListener-localhost:30000
m30998| Thu Jun 14 01:38:42 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:38:42 [conn] have to set shard version for conn: localhost:30000 ns:inserts_consistent.coll my last seq: 0 current: 2 version: 1|2||4fd978e1a4ce1fb93eb0053a manager: 0x8f344f0
m30998| Thu Jun 14 01:38:42 [conn] setShardVersion shard0000 localhost:30000 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0000", shardHost: "localhost:30000" } 0x8f35118
m30998| Thu Jun 14 01:38:42 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:38:42 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:38:42 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:38:42 [conn] connected connection!
m30998| Thu Jun 14 01:38:42 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd978e14f0e417772ff3686
m30998| Thu Jun 14 01:38:42 [conn] initializing shard connection to localhost:30001
m30998| Thu Jun 14 01:38:42 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd978e14f0e417772ff3686'), authoritative: true }
m30998| Thu Jun 14 01:38:42 [conn] resetting shard version of inserts_consistent.coll on localhost:30001, version is zero
m30998| Thu Jun 14 01:38:42 [conn] have to set shard version for conn: localhost:30001 ns:inserts_consistent.coll my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x8f344f0
m30998| Thu Jun 14 01:38:42 [conn] setShardVersion shard0001 localhost:30001 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0001", shardHost: "localhost:30001" } 0x8f36d60
m30998| Thu Jun 14 01:38:42 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:38:42 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "inserts_consistent.coll @ 1|2||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:38:42 [conn] [pcursor] finishing over 1 shards
m30998| Thu Jun 14 01:38:42 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "inserts_consistent.coll @ 1|2||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
----
Moving chunk to create stale mongos...
----
m30998| Thu Jun 14 01:38:42 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "inserts_consistent.coll @ 1|2||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30998| Thu Jun 14 01:38:42 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:38:42 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: -1, options: 0, query: { _id: /^inserts_consistent\.coll-.*/ }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:42 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:38:42 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30998| Thu Jun 14 01:38:42 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:38:42 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:42 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:42 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:38:42 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:38:42 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:38:42 [WriteBackListener-localhost:30001] connected connection!
m30000| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:56655 #6 (6 connections now open)
m30999| Thu Jun 14 01:38:42 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "inserts_consistent.coll-_id_0.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), ns: "inserts_consistent.coll", min: { _id: 0.0 }, max: { _id: MaxKey }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
Other shard : shard0001
m30998| Thu Jun 14 01:38:42 [WriteBackListener-localhost:30000] connected connection!
m30999| Thu Jun 14 01:38:42 [conn] CMD: movechunk: { moveChunk: "inserts_consistent.coll", find: { _id: 0.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:38:42 [conn] moving chunk ns: inserts_consistent.coll moving ( ns:inserts_consistent.coll at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:38:42 [conn4] received moveChunk request: { moveChunk: "inserts_consistent.coll", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "inserts_consistent.coll-_id_0.0", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:38:42 [conn4] created new distributed lock for inserts_consistent.coll on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:44557 #5 (5 connections now open)
m30001| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:44559 #6 (6 connections now open)
m30000| Thu Jun 14 01:38:42 [conn4] distributed lock 'inserts_consistent.coll/domU-12-31-39-01-70-B4:30000:1339652322:1548992921' acquired, ts : 4fd978e26338e542b9560d07
m30000| Thu Jun 14 01:38:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:42-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56649", time: new Date(1339652322455), what: "moveChunk.start", ns: "inserts_consistent.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:38:42 [conn4] moveChunk request accepted at version 1|2||4fd978e1a4ce1fb93eb0053a
m30000| Thu Jun 14 01:38:42 [conn4] moveChunk number of documents: 0
m30001| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:44560 #7 (7 connections now open)
m30000| Thu Jun 14 01:38:42 [initandlisten] connection accepted from 127.0.0.1:56658 #7 (7 connections now open)
m30001| Thu Jun 14 01:38:42 [FileAllocator] allocating new datafile /data/db/test1/inserts_consistent.ns, filling with zeroes...
m30001| Thu Jun 14 01:38:42 [FileAllocator] creating directory /data/db/test1/_tmp
m30000| Thu Jun 14 01:38:42 [FileAllocator] done allocating datafile /data/db/test0/inserts_consistent.1, size: 32MB, took 0.564 secs
m30001| Thu Jun 14 01:38:43 [FileAllocator] done allocating datafile /data/db/test1/inserts_consistent.ns, size: 16MB, took 0.679 secs
m30001| Thu Jun 14 01:38:43 [FileAllocator] allocating new datafile /data/db/test1/inserts_consistent.0, filling with zeroes...
m30000| Thu Jun 14 01:38:43 [conn4] moveChunk data transfer progress: { active: true, ns: "inserts_consistent.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:38:43 [FileAllocator] done allocating datafile /data/db/test1/inserts_consistent.0, size: 16MB, took 0.31 secs
m30001| Thu Jun 14 01:38:43 [migrateThread] build index inserts_consistent.coll { _id: 1 }
m30001| Thu Jun 14 01:38:43 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:43 [migrateThread] info: creating collection inserts_consistent.coll on add index
m30001| Thu Jun 14 01:38:43 [migrateThread] migrate commit succeeded flushing to secondaries for 'inserts_consistent.coll' { _id: 0.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:38:43 [FileAllocator] allocating new datafile /data/db/test1/inserts_consistent.1, filling with zeroes...
m30001| Thu Jun 14 01:38:44 [FileAllocator] done allocating datafile /data/db/test1/inserts_consistent.1, size: 32MB, took 0.669 secs
m30000| Thu Jun 14 01:38:44 [conn4] moveChunk data transfer progress: { active: true, ns: "inserts_consistent.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:38:44 [conn4] moveChunk setting version to: 2|0||4fd978e1a4ce1fb93eb0053a
m30001| Thu Jun 14 01:38:44 [migrateThread] migrate commit succeeded flushing to secondaries for 'inserts_consistent.coll' { _id: 0.0 } -> { _id: MaxKey }
m30001| Thu Jun 14 01:38:44 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:44-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652324479), what: "moveChunk.to", ns: "inserts_consistent.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 5: 1227, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 794 } }
m29000| Thu Jun 14 01:38:44 [initandlisten] connection accepted from 127.0.0.1:54893 #12 (12 connections now open)
m30000| Thu Jun 14 01:38:44 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "inserts_consistent.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:38:44 [conn4] moveChunk updating self version to: 2|1||4fd978e1a4ce1fb93eb0053a through { _id: MinKey } -> { _id: 0.0 } for collection 'inserts_consistent.coll'
m30000| Thu Jun 14 01:38:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:44-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56649", time: new Date(1339652324484), what: "moveChunk.commit", ns: "inserts_consistent.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:38:44 [conn4] doing delete inline
m30000| Thu Jun 14 01:38:44 [conn4] moveChunk deleted: 0
m30000| Thu Jun 14 01:38:44 [conn4] distributed lock 'inserts_consistent.coll/domU-12-31-39-01-70-B4:30000:1339652322:1548992921' unlocked.
m30000| Thu Jun 14 01:38:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:44-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56649", time: new Date(1339652324484), what: "moveChunk.from", ns: "inserts_consistent.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2011, step5 of 6: 16, step6 of 6: 0 } }
m30000| Thu Jun 14 01:38:44 [conn4] command admin.$cmd command: { moveChunk: "inserts_consistent.coll", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "inserts_consistent.coll-_id_0.0", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:326 w:793947 reslen:37 2030ms
m30999| Thu Jun 14 01:38:44 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:38:44 [conn] loading chunk manager for collection inserts_consistent.coll using old chunk manager w/ version 1|2||4fd978e1a4ce1fb93eb0053a and 2 chunks
m30999| Thu Jun 14 01:38:44 [conn] loaded 2 chunks into new chunk manager for inserts_consistent.coll with version 2|1||4fd978e1a4ce1fb93eb0053a
m30999| Thu Jun 14 01:38:44 [conn] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 4 version: 2|1||4fd978e1a4ce1fb93eb0053a based on: 1|2||4fd978e1a4ce1fb93eb0053a
{ "millis" : 2031, "ok" : 1 }
----
Inserting docs to be written back...
----
"Inserting -1"
"Inserting -2"
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "inserts_consistent.coll", id: ObjectId('4fd978e40000000000000000'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), yourVersion: Timestamp 1000|2, yourVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:38:44 [conn] about to initiate autosplit: ns:inserts_consistent.coll at: shard0000:localhost:30000 lastmod: 1|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 } dataWritten: 3971212 splitThreshold: 471859
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd978e40000000000000000 needVersion : 2|0||4fd978e1a4ce1fb93eb0053a mine : 1|2||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] op: insert len: 79 ns: inserts_consistent.coll{ _id: -1.0, hello: "world" }
m30998| Thu Jun 14 01:38:44 [conn] chunk not full enough to trigger auto-split no split entry
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] loading chunk manager for collection inserts_consistent.coll using old chunk manager w/ version 1|2||4fd978e1a4ce1fb93eb0053a and 2 chunks
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] loaded 2 chunks into new chunk manager for inserts_consistent.coll with version 2|1||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 3 version: 2|1||4fd978e1a4ce1fb93eb0053a based on: 1|2||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:38:44 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30998| Thu Jun 14 01:38:44 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] connected connection!
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd978e14f0e417772ff3686'), authoritative: true }
m30998| Thu Jun 14 01:38:44 [conn] found 0 dropped collections and 0 sharded collections for database admin
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30000 ns:inserts_consistent.coll my last seq: 0 current: 3 version: 2|1||4fd978e1a4ce1fb93eb0053a manager: 0x8f367b8
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0000", shardHost: "localhost:30000" } 0x8f339f0
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:38:44 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:38:44 [initandlisten] connection accepted from 127.0.0.1:56660 #8 (8 connections now open)
m30001| Thu Jun 14 01:38:44 [initandlisten] connection accepted from 127.0.0.1:44564 #8 (8 connections now open)
{ "flushed" : true, "ok" : 1 }
----
Inserting doc which successfully goes through...
----
m30001| Thu Jun 14 01:38:44 [conn8] no current chunk manager found for this shard, will initialize
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] connected connection!
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30001
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd978e14f0e417772ff3686'), authoritative: true }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] DBConfig unserialize: inserts_consistent { _id: "inserts_consistent", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] loaded 2 chunks into new chunk manager for inserts_consistent.coll with version 2|1||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 4 version: 2|1||4fd978e1a4ce1fb93eb0053a based on: (empty)
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] found 0 dropped collections and 1 sharded collections for database inserts_consistent
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30001 ns:inserts_consistent.coll my last seq: 0 current: 4 version: 2|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f382e0
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion shard0001 localhost:30001 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0001", shardHost: "localhost:30001" } 0x8f37e88
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion failed!
m30998| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "inserts_consistent.coll", need_authoritative: true, errmsg: "first time for collection 'inserts_consistent.coll'", ok: 0.0 }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30001 ns:inserts_consistent.coll my last seq: 0 current: 4 version: 2|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f382e0
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion shard0001 localhost:30001 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8f37e88
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30000 ns:inserts_consistent.coll my last seq: 3 current: 4 version: 2|1||4fd978e1a4ce1fb93eb0053a manager: 0x8f382e0
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0000", shardHost: "localhost:30000" } 0x8f339f0
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), ok: 1.0 }
m30998| Thu Jun 14 01:38:44 [conn] have to set shard version for conn: localhost:30000 ns:inserts_consistent.coll my last seq: 2 current: 4 version: 2|1||4fd978e1a4ce1fb93eb0053a manager: 0x8f382e0
m30998| Thu Jun 14 01:38:44 [conn] setShardVersion shard0000 localhost:30000 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0000", shardHost: "localhost:30000" } 0x8f35118
m30998| Thu Jun 14 01:38:44 [conn] setShardVersion success: { oldVersion: Timestamp 1000|2, oldVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), ok: 1.0 }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "inserts_consistent.coll", id: ObjectId('4fd978e40000000000000001'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), yourVersion: Timestamp 1000|2, yourVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd978e40000000000000001 needVersion : 2|0||4fd978e1a4ce1fb93eb0053a mine : 2|1||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] op: insert len: 79 ns: inserts_consistent.coll{ _id: -2.0, hello: "world" }
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 2|0||4fd978e1a4ce1fb93eb0053a, at version 2|1||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:44 [conn] about to initiate autosplit: ns:inserts_consistent.coll at: shard0000:localhost:30000 lastmod: 2|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 } dataWritten: 4750578 splitThreshold: 471859
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:38:44 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:38:44 [initandlisten] connection accepted from 127.0.0.1:56662 #9 (9 connections now open)
m30998| Thu Jun 14 01:38:44 [conn] chunk not full enough to trigger auto-split no split entry
m30998| Thu Jun 14 01:38:44 [WriteBackListener-localhost:30000] connected connection!
{
"singleShard" : "localhost:30000",
"n" : 0,
"connectionId" : 8,
"err" : null,
"ok" : 1,
"writeback" : ObjectId("4fd978e40000000000000001"),
"instanceIdent" : "domU-12-31-39-01-70-B4:30000",
"writebackGLE" : {
"singleShard" : "localhost:30000",
"n" : 0,
"connectionId" : 8,
"err" : null,
"ok" : 1
},
"initialGLEHost" : "localhost:30000"
}
----
GLE waited for the writebacks.
----
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] creating pcursor over QSpec { ns: "inserts_consistent.coll", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] initializing over 2 shards required by [inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a]
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:44 [conn] have to set shard version for conn: localhost:30000 ns:inserts_consistent.coll my last seq: 2 current: 4 version: 2|1||4fd978e1a4ce1fb93eb0053a manager: 0x8f6bf78
m30999| Thu Jun 14 01:38:44 [conn] setShardVersion shard0000 localhost:30000 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), shard: "shard0000", shardHost: "localhost:30000" } 0x8f66148
m30999| Thu Jun 14 01:38:44 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), ok: 1.0 }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] needed to set remote version on connection to value compatible with [inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a]
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:44 [conn] have to set shard version for conn: localhost:30001 ns:inserts_consistent.coll my last seq: 0 current: 4 version: 2|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f6bf78
m30999| Thu Jun 14 01:38:44 [conn] setShardVersion shard0001 localhost:30001 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), shard: "shard0001", shardHost: "localhost:30001" } 0x8f669b0
m30999| Thu Jun 14 01:38:44 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] needed to set remote version on connection to value compatible with [inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a]
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] finishing over 2 shards
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a", cursor: { _id: -1.0, hello: "world" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:44 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "inserts_consistent.coll @ 2|1||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
----
Now try moving the actual chunk we're writing to...
----
m30001| Thu Jun 14 01:38:44 [migrateThread] migrate commit succeeded flushing to secondaries for 'inserts_consistent.coll' { _id: MinKey } -> { _id: 0.0 }
m30999| Thu Jun 14 01:38:44 [conn] CMD: movechunk: { moveChunk: "inserts_consistent.coll", find: { _id: -1.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:38:44 [conn] moving chunk ns: inserts_consistent.coll moving ( ns:inserts_consistent.coll at: shard0000:localhost:30000 lastmod: 2|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:38:44 [conn4] received moveChunk request: { moveChunk: "inserts_consistent.coll", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "inserts_consistent.coll-_id_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:38:44 [conn4] created new distributed lock for inserts_consistent.coll on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:38:44 [conn4] distributed lock 'inserts_consistent.coll/domU-12-31-39-01-70-B4:30000:1339652322:1548992921' acquired, ts : 4fd978e46338e542b9560d08
m30000| Thu Jun 14 01:38:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:44-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56649", time: new Date(1339652324497), what: "moveChunk.start", ns: "inserts_consistent.coll", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:38:44 [conn4] moveChunk request accepted at version 2|1||4fd978e1a4ce1fb93eb0053a
m30000| Thu Jun 14 01:38:44 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:38:45 [conn4] moveChunk data transfer progress: { active: true, ns: "inserts_consistent.coll", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 107, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:38:45 [conn4] moveChunk setting version to: 3|0||4fd978e1a4ce1fb93eb0053a
m30001| Thu Jun 14 01:38:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'inserts_consistent.coll' { _id: MinKey } -> { _id: 0.0 }
m30001| Thu Jun 14 01:38:45 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:45-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652325507), what: "moveChunk.to", ns: "inserts_consistent.coll", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30000| Thu Jun 14 01:38:45 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "inserts_consistent.coll", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 107, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:38:45 [conn4] moveChunk moved last chunk out for collection 'inserts_consistent.coll'
m30000| Thu Jun 14 01:38:45 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:45-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56649", time: new Date(1339652325511), what: "moveChunk.commit", ns: "inserts_consistent.coll", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:38:45 [conn4] doing delete inline
m30000| Thu Jun 14 01:38:45 [conn4] moveChunk deleted: 3
m30000| Thu Jun 14 01:38:45 [conn4] distributed lock 'inserts_consistent.coll/domU-12-31-39-01-70-B4:30000:1339652322:1548992921' unlocked.
m30000| Thu Jun 14 01:38:45 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:45-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:56649", time: new Date(1339652325513), what: "moveChunk.from", ns: "inserts_consistent.coll", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 1 } }
m30000| Thu Jun 14 01:38:45 [conn4] command admin.$cmd command: { moveChunk: "inserts_consistent.coll", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "inserts_consistent.coll-_id_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:399 w:794827 reslen:37 1017ms
m30999| Thu Jun 14 01:38:45 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:38:45 [conn] loading chunk manager for collection inserts_consistent.coll using old chunk manager w/ version 2|1||4fd978e1a4ce1fb93eb0053a and 2 chunks
m30999| Thu Jun 14 01:38:45 [conn] loaded 1 chunks into new chunk manager for inserts_consistent.coll with version 3|0||4fd978e1a4ce1fb93eb0053a
m30999| Thu Jun 14 01:38:45 [conn] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 5 version: 3|0||4fd978e1a4ce1fb93eb0053a based on: 2|1||4fd978e1a4ce1fb93eb0053a
{ "millis" : 1018, "ok" : 1 }
----
Inserting second docs to get written back...
----
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "inserts_consistent.coll", id: ObjectId('4fd978e50000000000000002'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), yourVersion: Timestamp 2000|1, yourVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:38:45 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30998| Thu Jun 14 01:38:45 [conn] found 0 dropped collections and 0 sharded collections for database admin
{ "flushed" : true, "ok" : 1 }
----
Inserting second doc which successfully goes through...
----
----
GLE is now waiting for the writeback!
----
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] DBConfig unserialize: inserts_consistent { _id: "inserts_consistent", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] loaded 2 chunks into new chunk manager for inserts_consistent.coll with version 3|0||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 5 version: 3|0||4fd978e1a4ce1fb93eb0053a based on: (empty)
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] found 0 dropped collections and 1 sharded collections for database inserts_consistent
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd978e50000000000000002 needVersion : 0|0||000000000000000000000000 mine : 3|0||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] op: insert len: 79 ns: inserts_consistent.coll{ _id: -4.0, hello: "world" }
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] new version change detected to 0|0||000000000000000000000000, 2 writebacks processed at 2|0||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:45 [conn] have to set shard version for conn: localhost:30001 ns:inserts_consistent.coll my last seq: 2 current: 5 version: 3|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f367b8
m30998| Thu Jun 14 01:38:45 [conn] setShardVersion shard0001 localhost:30001 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0001", shardHost: "localhost:30001" } 0x8f36d60
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] warning: reloading config data for inserts_consistent, wanted version 0|0||000000000000000000000000 but currently have version 3|0||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] DBConfig unserialize: inserts_consistent { _id: "inserts_consistent", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:38:45 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:38:45 [conn] about to initiate autosplit: ns:inserts_consistent.coll at: shard0001:localhost:30001 lastmod: 3|0||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 } dataWritten: 8312784 splitThreshold: 471859
m30998| Thu Jun 14 01:38:45 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] loaded 2 chunks into new chunk manager for inserts_consistent.coll with version 3|0||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for inserts_consistent.coll: 0ms sequenceNumber: 6 version: 3|0||4fd978e1a4ce1fb93eb0053a based on: (empty)
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] found 0 dropped collections and 1 sharded collections for database inserts_consistent
m30998| Thu Jun 14 01:38:45 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30001 ns:inserts_consistent.coll my last seq: 4 current: 6 version: 3|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f39a68
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] setShardVersion shard0001 localhost:30001 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e14f0e417772ff3686'), shard: "shard0001", shardHost: "localhost:30001" } 0x8f37e88
m30001| Thu Jun 14 01:38:45 [initandlisten] connection accepted from 127.0.0.1:44566 #9 (9 connections now open)
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), ok: 1.0 }
m30998| Thu Jun 14 01:38:45 [conn] connected connection!
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "inserts_consistent.coll", id: ObjectId('4fd978e50000000000000003'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), yourVersion: Timestamp 2000|1, yourVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd978e50000000000000003 needVersion : 0|0||000000000000000000000000 mine : 3|0||4fd978e1a4ce1fb93eb0053a
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] op: insert len: 79 ns: inserts_consistent.coll{ _id: -5.0, hello: "world" }
m30998| Thu Jun 14 01:38:45 [conn] chunk not full enough to trigger auto-split no split entry
m30998| Thu Jun 14 01:38:45 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 0|0||000000000000000000000000, at version 3|0||4fd978e1a4ce1fb93eb0053a
----
All docs written this time!
----
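The pcursor trace that follows is what a plain query through mongos looks like at this verbosity while the test verifies its writes. Roughly equivalent shell code (illustrative only):

    // Hedged sketch: read everything back through the first mongos.
    var mongos = new Mongo("localhost:30999");
    var docs = mongos.getDB("inserts_consistent").getCollection("coll").find().toArray();
    printjson(docs.length);   // all previously inserted documents should be visible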
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] creating pcursor over QSpec { ns: "inserts_consistent.coll", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] initializing over 1 shards required by [inserts_consistent.coll @ 3|0||4fd978e1a4ce1fb93eb0053a]
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:45 [conn] have to set shard version for conn: localhost:30001 ns:inserts_consistent.coll my last seq: 4 current: 5 version: 3|0||4fd978e1a4ce1fb93eb0053a manager: 0x8f6c178
m30999| Thu Jun 14 01:38:45 [conn] setShardVersion shard0001 localhost:30001 inserts_consistent.coll { setShardVersion: "inserts_consistent.coll", configdb: "localhost:29000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), serverID: ObjectId('4fd978e1a4ce1fb93eb00538'), shard: "shard0001", shardHost: "localhost:30001" } 0x8f669b0
----
DONE
----
m30999| Thu Jun 14 01:38:45 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd978e1a4ce1fb93eb0053a'), ok: 1.0 }
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] needed to set remote version on connection to value compatible with [inserts_consistent.coll @ 3|0||4fd978e1a4ce1fb93eb0053a]
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "inserts_consistent.coll @ 3|0||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "inserts_consistent.coll @ 3|0||4fd978e1a4ce1fb93eb0053a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:45 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "inserts_consistent.coll @ 3|0||4fd978e1a4ce1fb93eb0053a", cursor: { _id: -1.0, hello: "world" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:45 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:38:45 [conn3] end connection 127.0.0.1:54866 (11 connections now open)
m29000| Thu Jun 14 01:38:45 [conn4] end connection 127.0.0.1:54870 (10 connections now open)
m30001| Thu Jun 14 01:38:45 [conn3] end connection 127.0.0.1:44550 (8 connections now open)
m30000| Thu Jun 14 01:38:45 [conn3] end connection 127.0.0.1:56646 (8 connections now open)
m30000| Thu Jun 14 01:38:45 [conn4] end connection 127.0.0.1:56649 (7 connections now open)
m29000| Thu Jun 14 01:38:45 [conn9] end connection 127.0.0.1:54882 (9 connections now open)
m29000| Thu Jun 14 01:38:45 [conn5] end connection 127.0.0.1:54871 (9 connections now open)
m30001| Thu Jun 14 01:38:45 [conn4] end connection 127.0.0.1:44553 (7 connections now open)
Thu Jun 14 01:38:46 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:38:46 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:38:46 [conn6] end connection 127.0.0.1:54874 (7 connections now open)
m29000| Thu Jun 14 01:38:46 [conn7] end connection 127.0.0.1:54875 (6 connections now open)
m29000| Thu Jun 14 01:38:46 [conn8] end connection 127.0.0.1:54876 (5 connections now open)
m30001| Thu Jun 14 01:38:46 [conn5] end connection 127.0.0.1:44557 (6 connections now open)
m30001| Thu Jun 14 01:38:46 [conn8] end connection 127.0.0.1:44564 (5 connections now open)
m30001| Thu Jun 14 01:38:46 [conn9] end connection 127.0.0.1:44566 (4 connections now open)
m30000| Thu Jun 14 01:38:46 [conn5] end connection 127.0.0.1:56653 (6 connections now open)
m30000| Thu Jun 14 01:38:46 [conn6] end connection 127.0.0.1:56655 (5 connections now open)
m30000| Thu Jun 14 01:38:46 [conn8] end connection 127.0.0.1:56660 (4 connections now open)
Thu Jun 14 01:38:47 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:38:47 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:38:47 [interruptThread] now exiting
m30000| Thu Jun 14 01:38:47 dbexit:
m30000| Thu Jun 14 01:38:47 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:38:47 [interruptThread] closing listening socket: 33
m30000| Thu Jun 14 01:38:47 [interruptThread] closing listening socket: 34
m30000| Thu Jun 14 01:38:47 [interruptThread] closing listening socket: 35
m30000| Thu Jun 14 01:38:47 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:38:47 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:38:47 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:38:47 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:38:47 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:38:47 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:38:47 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:38:47 dbexit: really exiting now
m29000| Thu Jun 14 01:38:47 [conn11] end connection 127.0.0.1:54886 (4 connections now open)
m29000| Thu Jun 14 01:38:47 [conn10] end connection 127.0.0.1:54885 (3 connections now open)
m30001| Thu Jun 14 01:38:47 [conn7] end connection 127.0.0.1:44560 (3 connections now open)
Thu Jun 14 01:38:48 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:38:48 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:38:48 [interruptThread] now exiting
m30001| Thu Jun 14 01:38:48 dbexit:
m30001| Thu Jun 14 01:38:48 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:38:48 [interruptThread] closing listening socket: 36
m30001| Thu Jun 14 01:38:48 [interruptThread] closing listening socket: 37
m30001| Thu Jun 14 01:38:48 [interruptThread] closing listening socket: 38
m30001| Thu Jun 14 01:38:48 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:38:48 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:38:48 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:38:48 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:38:48 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:38:48 [conn12] end connection 127.0.0.1:54893 (2 connections now open)
m30001| Thu Jun 14 01:38:48 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:38:48 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:38:48 dbexit: really exiting now
Thu Jun 14 01:38:49 shell: stopped mongo program on port 30001
m29000| Thu Jun 14 01:38:49 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:38:49 [interruptThread] now exiting
m29000| Thu Jun 14 01:38:49 dbexit:
m29000| Thu Jun 14 01:38:49 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:38:49 [interruptThread] closing listening socket: 39
m29000| Thu Jun 14 01:38:49 [interruptThread] closing listening socket: 40
m29000| Thu Jun 14 01:38:49 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:38:49 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:38:49 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:38:49 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:38:49 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:38:49 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:38:49 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:38:49 dbexit: really exiting now
Thu Jun 14 01:38:50 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 10.608 seconds ***
10685.956955ms
Thu Jun 14 01:38:50 [initandlisten] connection accepted from 127.0.0.1:35132 #38 (25 connections now open)
*******************************************
Test : jumbo1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/jumbo1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/jumbo1.js";TestData.testFile = "jumbo1.js";TestData.testName = "jumbo1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:38:50 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/jump10'
Thu Jun 14 01:38:50 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/jump10
m30000| Thu Jun 14 01:38:50
m30000| Thu Jun 14 01:38:50 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:38:50
m30000| Thu Jun 14 01:38:50 [initandlisten] MongoDB starting : pid=26229 port=30000 dbpath=/data/db/jump10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:38:50 [initandlisten]
m30000| Thu Jun 14 01:38:50 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:38:50 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:38:50 [initandlisten]
m30000| Thu Jun 14 01:38:50 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:38:50 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:38:50 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:38:50 [initandlisten]
m30000| Thu Jun 14 01:38:50 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:38:50 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:38:50 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:38:50 [initandlisten] options: { dbpath: "/data/db/jump10", port: 30000 }
m30000| Thu Jun 14 01:38:50 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:38:50 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/jump11'
m30000| Thu Jun 14 01:38:50 [initandlisten] connection accepted from 127.0.0.1:56666 #1 (1 connection now open)
Thu Jun 14 01:38:50 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/jump11
m30001| Thu Jun 14 01:38:50
m30001| Thu Jun 14 01:38:50 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:38:50
m30001| Thu Jun 14 01:38:50 [initandlisten] MongoDB starting : pid=26242 port=30001 dbpath=/data/db/jump11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:38:50 [initandlisten]
m30001| Thu Jun 14 01:38:50 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:38:50 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:38:50 [initandlisten]
m30001| Thu Jun 14 01:38:50 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:38:50 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:38:50 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:38:50 [initandlisten]
m30001| Thu Jun 14 01:38:50 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:38:50 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:38:50 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:38:50 [initandlisten] options: { dbpath: "/data/db/jump11", port: 30001 }
m30001| Thu Jun 14 01:38:50 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:38:50 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:44571 #1 (1 connection now open)
ShardingTest jump1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
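The summary printed above (one config server, two shards, a mongos started with -vv, and MaxChunkSize: 1 further down) is what a two-shard ShardingTest reports at startup. A sketch of the kind of setup that produces it, using the positional ShardingTest constructor of this era; this is an assumed approximation of jumbo1.js, not its verbatim source:

    // Hedged sketch: name, numShards, verboseLevel (2 => -vv), numMongos, other params.
    var s = new ShardingTest("jump1", 2, 2, 1, { chunksize: 1 });
    s.adminCommand({ enablesharding: "test" });                     // logged below as "enabling sharding on: test"
    s.adminCommand({ shardcollection: "test.foo", key: { x: 1 } }); // logged below as "CMD: shardcollection"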
Thu Jun 14 01:38:51 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -vv
m30999| Thu Jun 14 01:38:51 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:38:51 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26256 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:38:51 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:38:51 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:38:51 [mongosMain] options: { configdb: "localhost:30000", port: 30999, vv: true }
m30999| Thu Jun 14 01:38:51 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:38:51 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:51 [mongosMain] connected connection!
m30000| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:56669 #2 (2 connections now open)
m30000| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:56670 #3 (3 connections now open)
m30000| Thu Jun 14 01:38:51 [FileAllocator] allocating new datafile /data/db/jump10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:38:51 [FileAllocator] creating directory /data/db/jump10/_tmp
m30000| Thu Jun 14 01:38:51 [FileAllocator] done allocating datafile /data/db/jump10/config.ns, size: 16MB, took 0.292 secs
m30000| Thu Jun 14 01:38:51 [FileAllocator] allocating new datafile /data/db/jump10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:38:51 [FileAllocator] done allocating datafile /data/db/jump10/config.0, size: 16MB, took 0.256 secs
m30000| Thu Jun 14 01:38:51 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn2] insert config.settings keyUpdates:0 locks(micros) w:560547 560ms
m30000| Thu Jun 14 01:38:51 [FileAllocator] allocating new datafile /data/db/jump10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:56674 #4 (4 connections now open)
m30000| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:56675 #5 (5 connections now open)
m30000| Thu Jun 14 01:38:51 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:38:51 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:38:51 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:56676 #6 (6 connections now open)
m30000| Thu Jun 14 01:38:51 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:38:51 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:51 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:51 [mongosMain] connected connection!
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:51 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:38:51 [mongosMain] MaxChunkSize: 1
m30999| Thu Jun 14 01:38:51 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:38:51 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:38:51 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:38:51 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:38:51 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:38:51 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:38:51 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:38:51
m30999| Thu Jun 14 01:38:51 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:38:51 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:51 [Balancer] connected connection!
m30999| Thu Jun 14 01:38:51 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:38:51 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:38:51 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:38:51 [Balancer] skew from remote server localhost:30000 found: 0
m30999| Thu Jun 14 01:38:51 [Balancer] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds.
m30999| Thu Jun 14 01:38:51 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:38:51 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:38:51 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd978ebf59a384fa81d5cfe" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:38:51 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd978ebf59a384fa81d5cfe
m30999| Thu Jun 14 01:38:51 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:38:51 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652331:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:38:51 [Balancer] no collections to balance
m30999| Thu Jun 14 01:38:51 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:38:51 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:38:51 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:38:51 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652331:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:38:51 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
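The MaxChunkSize: 1 that mongosMain and the Balancer report above is read from the "chunksize" document in config.settings on the config server (its insert is the 560ms write logged earlier). An illustrative way to inspect it (assumed query, not from the test):

    // Hedged sketch: look at the chunk-size setting the router is using (value is in MB).
    var configDB = new Mongo("localhost:30000").getDB("config");
    printjson(configDB.settings.findOne({ _id: "chunksize" }));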
ShardingTest undefined going to add shard : localhost:30000
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:38:51 [mongosMain] connection accepted from 127.0.0.1:53558 #1 (1 connection now open)
m30999| Thu Jun 14 01:38:51 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:38:51 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:38:51 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30999| Thu Jun 14 01:38:51 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:51 [conn] connected connection!
m30999| Thu Jun 14 01:38:51 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30999| Thu Jun 14 01:38:51 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:38:51 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:38:51 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:38:51 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:38:51 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { x: 1.0 } }
m30999| Thu Jun 14 01:38:51 [conn] enable sharding on: test.foo with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:38:51 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:51 [conn] loaded 1 chunks into new chunk manager for test.foo with version 1|0||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:51 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd978ebf59a384fa81d5cff based on: (empty)
m30999| Thu Jun 14 01:38:51 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:51 [conn] connected connection!
m30999| Thu Jun 14 01:38:51 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd978ebf59a384fa81d5cfd
m30999| Thu Jun 14 01:38:51 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:38:51 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), authoritative: true }
m30999| Thu Jun 14 01:38:51 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:38:51 [conn] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x8cc90e0
m30999| Thu Jun 14 01:38:51 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0000", shardHost: "localhost:30000" } 0x8cc94a0
m30999| Thu Jun 14 01:38:51 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:38:51 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:51 [conn] connected connection!
m30999| Thu Jun 14 01:38:51 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd978ebf59a384fa81d5cfd
m30999| Thu Jun 14 01:38:51 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:38:51 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), authoritative: true }
m30999| Thu Jun 14 01:38:51 BackgroundJob starting: WriteBackListener-localhost:30001
m30000| Thu Jun 14 01:38:51 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:38:51 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:56679 #7 (7 connections now open)
m30001| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:44581 #2 (2 connections now open)
m30001| Thu Jun 14 01:38:51 [FileAllocator] allocating new datafile /data/db/jump11/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:38:51 [FileAllocator] creating directory /data/db/jump11/_tmp
m30001| Thu Jun 14 01:38:51 [initandlisten] connection accepted from 127.0.0.1:44583 #3 (3 connections now open)
m30000| Thu Jun 14 01:38:52 [FileAllocator] done allocating datafile /data/db/jump10/config.1, size: 32MB, took 0.548 secs
m30001| Thu Jun 14 01:38:52 [FileAllocator] done allocating datafile /data/db/jump11/test.ns, size: 16MB, took 0.322 secs
m30001| Thu Jun 14 01:38:52 [FileAllocator] allocating new datafile /data/db/jump11/test.0, filling with zeroes...
m30001| Thu Jun 14 01:38:52 [FileAllocator] done allocating datafile /data/db/jump11/test.0, size: 16MB, took 0.331 secs
m30001| Thu Jun 14 01:38:52 [FileAllocator] allocating new datafile /data/db/jump11/test.1, filling with zeroes...
m30001| Thu Jun 14 01:38:52 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:38:52 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:52 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:38:52 [conn2] build index test.foo { x: 1.0 }
m30001| Thu Jun 14 01:38:52 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:52 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:6 W:60 r:250 w:1140302 1140ms
m30001| Thu Jun 14 01:38:52 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:73 reslen:51 1137ms
m30001| Thu Jun 14 01:38:52 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:38:52 [initandlisten] connection accepted from 127.0.0.1:44585 #4 (4 connections now open)
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] warning: chunk is larger than 1024 bytes because of key { x: 0.0 }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] warning: chunk is larger than 1024 bytes because of key { x: 0.0 }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] warning: chunk is larger than 1024 bytes because of key { x: 0.0 }
m30999| Thu Jun 14 01:38:52 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd978ebf59a384fa81d5cff manager: 0x8cc90e0
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:38:52 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd978ebf59a384fa81d5cff manager: 0x8cc90e0
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } dataWritten: 68311 splitThreshold: 921
m30999| Thu Jun 14 01:38:52 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:38:52 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:38:52 [conn] connected connection!
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } dataWritten: 10043 splitThreshold: 921
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split { x: 1.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } dataWritten: 10043 splitThreshold: 921
m30000| Thu Jun 14 01:38:52 [initandlisten] connection accepted from 127.0.0.1:56681 #8 (8 connections now open)
m30999| Thu Jun 14 01:38:52 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|0||4fd978ebf59a384fa81d5cff and 1 chunks
m30999| Thu Jun 14 01:38:52 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|2||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:52 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd978ebf59a384fa81d5cff based on: 1|0||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } on: { x: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:38:52 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 2 current: 3 version: 1|2||4fd978ebf59a384fa81d5cff manager: 0x8cc7c28
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } dataWritten: 152345 splitThreshold: 471859
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } dataWritten: 100430 splitThreshold: 471859
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } dataWritten: 100430 splitThreshold: 471859
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } dataWritten: 100430 splitThreshold: 471859
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } dataWritten: 100430 splitThreshold: 471859
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } dataWritten: 100430 splitThreshold: 471859
m30999| Thu Jun 14 01:38:52 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|2||4fd978ebf59a384fa81d5cff and 2 chunks
m30999| Thu Jun 14 01:38:52 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|4||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:52 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd978ebf59a384fa81d5cff based on: 1|2||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } on: { x: 53.0 } (splitThreshold 471859) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:52 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 3 current: 4 version: 1|4||4fd978ebf59a384fa81d5cff manager: 0x8cc90e0
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } dataWritten: 195400 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split { x: 105.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split { x: 105.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split { x: 105.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|4||4fd978ebf59a384fa81d5cff and 3 chunks
m30999| Thu Jun 14 01:38:52 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|6||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:52 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd978ebf59a384fa81d5cff based on: 1|4||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey } on: { x: 173.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:52 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 4 current: 5 version: 1|6||4fd978ebf59a384fa81d5cff manager: 0x8cc7970
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:52 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } dataWritten: 195820 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split { x: 225.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split { x: 225.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30001| Thu Jun 14 01:38:52 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 0.0 } ], shardId: "test.foo-x_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:52 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978ecbb51f6302c92cdd9
m30001| Thu Jun 14 01:38:52 [conn4] splitChunk accepted at version 1|0||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:52 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:52-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652332889), what: "split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
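The splitChunk request just logged was generated by mongos's autosplit path, but the same split can be requested explicitly through mongos with the split command. A hedged example of an equivalent manual split at x: 0 (not part of the test):

    // Hedged sketch: ask mongos to split test.foo at a specific key value.
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({ split: "test.foo", middle: { x: 0 } }));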
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 0.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 53.0 } ], shardId: "test.foo-x_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:52 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978ecbb51f6302c92cdda
m30001| Thu Jun 14 01:38:52 [conn4] splitChunk accepted at version 1|2||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:52 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:52-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652332915), what: "split", ns: "test.foo", details: { before: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 0.0 }, max: { x: 53.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 53.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 53.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 173.0 } ], shardId: "test.foo-x_53.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:52 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978ecbb51f6302c92cddb
m30001| Thu Jun 14 01:38:52 [conn4] splitChunk accepted at version 1|4||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:52 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:52-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652332959), what: "split", ns: "test.foo", details: { before: { min: { x: 53.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 53.0 }, max: { x: 173.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 173.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652332:1880705399 (sleeping for 30000ms)
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30000| Thu Jun 14 01:38:52 [initandlisten] connection accepted from 127.0.0.1:56683 #9 (9 connections now open)
m30999| Thu Jun 14 01:38:52 [conn] chunk not full enough to trigger auto-split { x: 225.0 }
m30999| Thu Jun 14 01:38:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30001| Thu Jun 14 01:38:52 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:52 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 173.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 292.0 } ], shardId: "test.foo-x_173.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:52 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978ecbb51f6302c92cddc
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|6||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333001), what: "split", ns: "test.foo", details: { before: { min: { x: 173.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 173.0 }, max: { x: 292.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 292.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|6||4fd978ebf59a384fa81d5cff and 4 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|8||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||4fd978ebf59a384fa81d5cff based on: 1|6||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 173.0 } max: { x: MaxKey } on: { x: 292.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 5 current: 6 version: 1|8||4fd978ebf59a384fa81d5cff manager: 0x8cc90e0
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 292.0 } max: { x: MaxKey } dataWritten: 193594 splitThreshold: 943718
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 292.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 292.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 344.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 292.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 401.0 } ], shardId: "test.foo-x_292.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cddd
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|8||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333020), what: "split", ns: "test.foo", details: { before: { min: { x: 292.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 292.0 }, max: { x: 401.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 401.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
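The splitChunk request above is the internal command mongos sends to the shard once it has chosen a split key; the user-facing equivalent is the split command (or the sh.splitAt helper) issued against mongos. A sketch of forcing the same split by hand, assuming a shell on the mongos:

    // split the chunk covering { x: 401 } exactly at that key
    db.adminCommand({ split: "test.foo", middle: { x: 401 } })
    // equivalent shell helper
    sh.splitAt("test.foo", { x: 401 })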
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 401.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 500.0 } ], shardId: "test.foo-x_401.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cdde
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|10||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333040), what: "split", ns: "test.foo", details: { before: { min: { x: 401.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 401.0 }, max: { x: 500.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 500.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 500.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] warning: chunk is larger than 1048576 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 500.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 563.0 } ], shardId: "test.foo-x_500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cddf
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|12||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333132), what: "split", ns: "test.foo", details: { before: { min: { x: 500.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 500.0 }, max: { x: 563.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 563.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
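The repeated warnings before this split suggest that documents sharing the single key { x: 500.0 } already exceed the 1 MB threshold (1048576 bytes), so the range starting at that key cannot be split there and the chosen split key ends up just past it ({ x: 563.0 }). The dataSize command can measure such a range directly; a sketch, assuming a shell on the mongos:

    // bytes and object count between two shard-key values
    db.getSiblingDB("test").runCommand({
        dataSize: "test.foo",
        keyPattern: { x: 1 },
        min: { x: 500 },
        max: { x: 563 }
    })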
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 563.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 563.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 563.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 563.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 563.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 563.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 563.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 563.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 675.0 } ], shardId: "test.foo-x_563.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde0
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|14||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333150), what: "split", ns: "test.foo", details: { before: { min: { x: 563.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 563.0 }, max: { x: 675.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 675.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 675.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 675.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 675.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 675.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 675.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 675.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 675.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 675.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 787.0 } ], shardId: "test.foo-x_675.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde1
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|16||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333169), what: "split", ns: "test.foo", details: { before: { min: { x: 675.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 675.0 }, max: { x: 787.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 787.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 787.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 787.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 787.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 787.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 787.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 292.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 344.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 292.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 344.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 292.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|8||4fd978ebf59a384fa81d5cff and 5 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|10||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||4fd978ebf59a384fa81d5cff based on: 1|8||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 292.0 } max: { x: MaxKey } on: { x: 401.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 6 current: 7 version: 1|10||4fd978ebf59a384fa81d5cff manager: 0x8ccb960
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|10, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
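Each "loading chunk manager ... / loaded 2 chunks ..." pair above corresponds to mongos refreshing its view from the config server's chunks collection, and every split is also recorded in the config changelog (the "about to log metadata event" lines on the shard). A sketch of inspecting both through mongos, assuming a shell on localhost:30999:

    // chunk boundaries the ChunkManager loads, in shard-key order
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 })
    // split events logged by the shard
    db.getSiblingDB("config").changelog.find({ what: "split", ns: "test.foo" })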
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } dataWritten: 209868 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 453.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 453.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 453.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|10||4fd978ebf59a384fa81d5cff and 6 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|12||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||4fd978ebf59a384fa81d5cff based on: 1|10||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { x: 401.0 } max: { x: MaxKey } on: { x: 500.0 } (splitThreshold 943718)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 7 current: 8 version: 1|12||4fd978ebf59a384fa81d5cff manager: 0x8ccb538
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|12, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 197986 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 501.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 501.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 501.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|12||4fd978ebf59a384fa81d5cff and 7 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|14||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||4fd978ebf59a384fa81d5cff based on: 1|12||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { x: 500.0 } max: { x: MaxKey } on: { x: 563.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 8 current: 9 version: 1|14||4fd978ebf59a384fa81d5cff manager: 0x8cc9130
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|14, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
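The "(migrate suggested, but no migrations allowed)" note above means the top chunk would normally be handed off for a move, but migrations are disabled here (e.g. the balancer is stopped for this test). A hedged sketch of the classic way to toggle that state in this era, via the balancer document in config.settings, assuming a shell on the mongos:

    // stop the balancer (upsert the balancer settings document)
    db.getSiblingDB("config").settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true)
    // confirm the current state
    db.getSiblingDB("config").settings.find({ _id: "balancer" })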
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { x: 563.0 } max: { x: MaxKey } dataWritten: 193164 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { x: 563.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { x: 563.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 615.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { x: 563.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 615.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { x: 563.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 615.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { x: 563.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|14||4fd978ebf59a384fa81d5cff and 8 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|16||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||4fd978ebf59a384fa81d5cff based on: 1|14||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { x: 563.0 } max: { x: MaxKey } on: { x: 675.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 9 current: 10 version: 1|16||4fd978ebf59a384fa81d5cff manager: 0x8cd3228
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|16, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { x: 675.0 } max: { x: MaxKey } dataWritten: 194838 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { x: 675.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { x: 675.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 727.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { x: 675.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 727.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { x: 675.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 727.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { x: 675.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|16||4fd978ebf59a384fa81d5cff and 9 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|18||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||4fd978ebf59a384fa81d5cff based on: 1|16||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { x: 675.0 } max: { x: MaxKey } on: { x: 787.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 10 current: 11 version: 1|18||4fd978ebf59a384fa81d5cff manager: 0x8ccb960
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|18, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } dataWritten: 210212 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 839.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 839.0 }
m30001| Thu Jun 14 01:38:53 [FileAllocator] done allocating datafile /data/db/jump11/test.1, size: 32MB, took 0.672 secs
m30001| Thu Jun 14 01:38:53 [FileAllocator] allocating new datafile /data/db/jump11/test.2, filling with zeroes...
m30001| Thu Jun 14 01:38:53 [conn3] insert test.foo keyUpdates:0 locks(micros) W:85 r:1144 w:404915 340ms
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 787.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 787.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 787.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 787.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 902.0 } ], shardId: "test.foo-x_787.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde2
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|18||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333530), what: "split", ns: "test.foo", details: { before: { min: { x: 787.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 787.0 }, max: { x: 902.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 902.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
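Every splitChunk shown here briefly takes the per-collection distributed lock ('test.foo/...') against the config server before touching metadata. The lock documents live in config.locks and the liveness pings in config.lockpings; a sketch of inspecting them, assuming a shell on the mongos:

    // current holder/state of the test.foo distributed lock (state 0 = unlocked)
    db.getSiblingDB("config").locks.find({ _id: "test.foo" })
    // last ping from each process that uses distributed locks
    db.getSiblingDB("config").lockpings.find()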
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 902.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 902.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 902.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 902.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 902.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 902.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 902.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 902.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1007.0 } ], shardId: "test.foo-x_902.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde3
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|20||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333547), what: "split", ns: "test.foo", details: { before: { min: { x: 902.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 902.0 }, max: { x: 1007.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1007.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1007.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1007.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1124.0 } ], shardId: "test.foo-x_1007.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde4
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|22||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333596), what: "split", ns: "test.foo", details: { before: { min: { x: 1007.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1007.0 }, max: { x: 1124.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1124.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1124.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1243.0 } ], shardId: "test.foo-x_1124.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde5
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|24||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333616), what: "split", ns: "test.foo", details: { before: { min: { x: 1124.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1124.0 }, max: { x: 1243.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1243.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1243.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1357.0 } ], shardId: "test.foo-x_1243.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde6
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|26||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333650), what: "split", ns: "test.foo", details: { before: { min: { x: 1243.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1243.0 }, max: { x: 1357.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1357.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1357.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1357.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1357.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1357.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1357.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1357.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1357.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1357.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1468.0 } ], shardId: "test.foo-x_1357.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde7
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|28||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333668), what: "split", ns: "test.foo", details: { before: { min: { x: 1357.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1357.0 }, max: { x: 1468.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1468.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1468.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1468.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1583.0 } ], shardId: "test.foo-x_1468.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde8
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|30||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333687), what: "split", ns: "test.foo", details: { before: { min: { x: 1468.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1468.0 }, max: { x: 1583.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1583.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1583.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1583.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1703.0 } ], shardId: "test.foo-x_1583.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cde9
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|32||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333707), what: "split", ns: "test.foo", details: { before: { min: { x: 1583.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1583.0 }, max: { x: 1703.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1703.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1703.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1703.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1820.0 } ], shardId: "test.foo-x_1703.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cdea
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|34||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333728), what: "split", ns: "test.foo", details: { before: { min: { x: 1703.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1703.0 }, max: { x: 1820.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1820.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1820.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1820.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1820.0 } -->> { : MaxKey }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
      test.foo chunks:
        shard0001  20
        { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0001 Timestamp(1000, 1)
        { "x" : 0 } -->> { "x" : 53 } on : shard0001 Timestamp(1000, 3)
        { "x" : 53 } -->> { "x" : 173 } on : shard0001 Timestamp(1000, 5)
        { "x" : 173 } -->> { "x" : 292 } on : shard0001 Timestamp(1000, 7)
        { "x" : 292 } -->> { "x" : 401 } on : shard0001 Timestamp(1000, 9)
        { "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(1000, 11)
        { "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
        { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
        { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
        { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
        { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
        { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
        { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
        { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
        { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
        { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
        { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
        { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
        { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
        { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
command failed: { "ok" : 0, "errmsg" : "that chunk is already on that shard" }
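
The "command failed" line above is the test harness reporting a moveChunk that targets the shard which already owns the chunk (the matching mongos line "CMD: movechunk: { moveChunk: "test.foo", find: { x: 0.0 }, to: "shard0001" }" appears a little further down). A minimal sketch of the same request from a mongo shell connected to the mongos (m30999); the namespace, key, and shard name are taken from this log:

  // { x: 0 } -->> { x: 53 } already lives on shard0001, so mongos refuses the move
  db.adminCommand({ moveChunk: "test.foo", find: { x: 0 }, to: "shard0001" })
  // => { "ok" : 0, "errmsg" : "that chunk is already on that shard" }
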
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1820.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1820.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1820.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1820.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1820.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1820.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 1935.0 } ], shardId: "test.foo-x_1820.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cdeb
m30001| Thu Jun 14 01:38:53 [conn4] splitChunk accepted at version 1|36||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333751), what: "split", ns: "test.foo", details: { before: { min: { x: 1820.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 1820.0 }, max: { x: 1935.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 1935.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1935.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1935.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:38:53 [conn4] request split points lookup for chunk test.foo { : 1935.0 } -->> { : MaxKey }
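
The "request split points lookup" / "received splitChunk request" pairs above are the shard-side half of autosplitting: once mongos has routed roughly splitThreshold bytes of writes into a chunk (the dataWritten/splitThreshold lines from m30999 below), it asks shard0001 for split points and issues splitChunk, which the shard commits under the 'test.foo/...' distributed lock. The same split can be requested by hand with the split admin command; a small sketch against the mongos, reusing one of the split keys seen in this log:

  // ask mongos to split the chunk containing { x: 1935 } at exactly that key;
  // mongos forwards it to shard0001 as a splitChunk command like the ones logged above
  db.adminCommand({ split: "test.foo", middle: { x: 1935 } })
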
m30001| Thu Jun 14 01:38:53 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 0.0 }, max: { x: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:38:53 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:38:53 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978edbb51f6302c92cdec
m30001| Thu Jun 14 01:38:53 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:53-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652333796), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:38:53 [conn4] moveChunk request accepted at version 1|38||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:38:53 [conn4] moveChunk number of documents: 53
m30001| Thu Jun 14 01:38:53 [initandlisten] connection accepted from 127.0.0.1:44587 #5 (5 connections now open)
m30000| Thu Jun 14 01:38:53 [FileAllocator] allocating new datafile /data/db/jump10/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 839.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|18||4fd978ebf59a384fa81d5cff and 10 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|20||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||4fd978ebf59a384fa81d5cff based on: 1|18||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { x: 787.0 } max: { x: MaxKey } on: { x: 902.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 11 current: 12 version: 1|20||4fd978ebf59a384fa81d5cff manager: 0x8cd3228
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|20, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { x: 902.0 } max: { x: MaxKey } dataWritten: 197305 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { x: 902.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { x: 902.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { x: 902.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 954.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { x: 902.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 954.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { x: 902.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|20||4fd978ebf59a384fa81d5cff and 11 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|22||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||4fd978ebf59a384fa81d5cff based on: 1|20||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { x: 902.0 } max: { x: MaxKey } on: { x: 1007.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 12 current: 13 version: 1|22||4fd978ebf59a384fa81d5cff manager: 0x8cc7b78
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|22, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } dataWritten: 197855 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1059.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1059.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1059.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|22||4fd978ebf59a384fa81d5cff and 12 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|24||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||4fd978ebf59a384fa81d5cff based on: 1|22||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { x: 1007.0 } max: { x: MaxKey } on: { x: 1124.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 13 current: 14 version: 1|24||4fd978ebf59a384fa81d5cff manager: 0x8cd3228
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|24, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } dataWritten: 191052 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1176.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1176.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1176.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|24||4fd978ebf59a384fa81d5cff and 13 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|26||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||4fd978ebf59a384fa81d5cff based on: 1|24||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { x: 1124.0 } max: { x: MaxKey } on: { x: 1243.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 14 current: 15 version: 1|26||4fd978ebf59a384fa81d5cff manager: 0x8cd37f8
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|26, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { x: 1243.0 } max: { x: MaxKey } dataWritten: 193481 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { x: 1243.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { x: 1243.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1295.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { x: 1243.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1295.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { x: 1243.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1295.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { x: 1243.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|26||4fd978ebf59a384fa81d5cff and 14 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|28||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||4fd978ebf59a384fa81d5cff based on: 1|26||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { x: 1243.0 } max: { x: MaxKey } on: { x: 1357.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 15 current: 16 version: 1|28||4fd978ebf59a384fa81d5cff manager: 0x8ccbb30
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|28, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { x: 1357.0 } max: { x: MaxKey } dataWritten: 190996 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { x: 1357.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { x: 1357.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1409.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { x: 1357.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1409.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { x: 1357.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1409.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { x: 1357.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|28||4fd978ebf59a384fa81d5cff and 15 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|30||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||4fd978ebf59a384fa81d5cff based on: 1|28||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { x: 1357.0 } max: { x: MaxKey } on: { x: 1468.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 16 current: 17 version: 1|30||4fd978ebf59a384fa81d5cff manager: 0x8cc7b78
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|30, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } dataWritten: 210571 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1520.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1520.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1520.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|30||4fd978ebf59a384fa81d5cff and 16 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|32||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||4fd978ebf59a384fa81d5cff based on: 1|30||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { x: 1468.0 } max: { x: MaxKey } on: { x: 1583.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 17 current: 18 version: 1|32||4fd978ebf59a384fa81d5cff manager: 0x8ccbb20
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|32, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } dataWritten: 189215 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1635.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1635.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1635.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|32||4fd978ebf59a384fa81d5cff and 17 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|34||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||4fd978ebf59a384fa81d5cff based on: 1|32||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { x: 1583.0 } max: { x: MaxKey } on: { x: 1703.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 18 current: 19 version: 1|34||4fd978ebf59a384fa81d5cff manager: 0x8cc7b78
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|34, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } dataWritten: 192376 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1755.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1755.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1755.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|34||4fd978ebf59a384fa81d5cff and 18 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|36||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||4fd978ebf59a384fa81d5cff based on: 1|34||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { x: 1703.0 } max: { x: MaxKey } on: { x: 1820.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 19 current: 20 version: 1|36||4fd978ebf59a384fa81d5cff manager: 0x8ccbb20
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|36, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } dataWritten: 214346 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1872.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1872.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1872.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|36||4fd978ebf59a384fa81d5cff and 19 chunks
m30999| Thu Jun 14 01:38:53 [conn] loaded 2 chunks into new chunk manager for test.foo with version 1|38||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||4fd978ebf59a384fa81d5cff based on: 1|36||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { x: 1820.0 } max: { x: MaxKey } on: { x: 1935.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:38:53 [conn] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 20 current: 21 version: 1|38||4fd978ebf59a384fa81d5cff manager: 0x8ccb960
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|38, versionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), serverID: ObjectId('4fd978ebf59a384fa81d5cfd'), shard: "shard0001", shardHost: "localhost:30001" } 0x8cc9f40
m30999| Thu Jun 14 01:38:53 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ok: 1.0 }
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { x: 1935.0 } max: { x: MaxKey } dataWritten: 190684 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { x: 1935.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:38:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { x: 1935.0 } max: { x: MaxKey } dataWritten: 190817 splitThreshold: 943718
m30999| Thu Jun 14 01:38:53 [conn] chunk not full enough to trigger auto-split { x: 1987.0 }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:53 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
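
The [pcursor] lines above show how the sharding status report earlier in this log is assembled: mongos issues plain queries against the config database on localhost:30000 (config.version, config.shards, config.databases, config.collections, and config.chunks for test.foo, sorted by min). The same metadata can be read directly; a sketch, assuming a shell connected to the mongos:

  var cfg = db.getSiblingDB("config");
  cfg.shards.find().sort({ _id: 1 });                   // shard0000, shard0001
  cfg.chunks.find({ ns: "test.foo" }).sort({ min: 1 }); // the 20 chunks listed in the status output
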
m30999| Thu Jun 14 01:38:53 [conn] CMD: movechunk: { moveChunk: "test.foo", find: { x: 0.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:38:53 [conn] CMD: movechunk: { moveChunk: "test.foo", find: { x: 0.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:38:53 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { x: 0.0 } max: { x: 53.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:38:54 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 0.0 }, max: { x: 53.0 }, shardKeyPattern: { x: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:38:55 [FileAllocator] done allocating datafile /data/db/jump11/test.2, size: 64MB, took 1.896 secs
m30000| Thu Jun 14 01:38:55 [FileAllocator] done allocating datafile /data/db/jump10/test.ns, size: 16MB, took 1.622 secs
m30000| Thu Jun 14 01:38:55 [FileAllocator] allocating new datafile /data/db/jump10/test.0, filling with zeroes...
m30000| Thu Jun 14 01:38:55 [FileAllocator] done allocating datafile /data/db/jump10/test.0, size: 16MB, took 0.366 secs
m30000| Thu Jun 14 01:38:55 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:38:55 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:38:55 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:38:55 [migrateThread] build index test.foo { x: 1.0 }
m30000| Thu Jun 14 01:38:55 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:38:55 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 0.0 }, max: { x: 53.0 }, shardKeyPattern: { x: 1 }, state: "clone", counts: { cloned: 53, clonedBytes: 532279, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:38:55 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 0.0 } -> { x: 53.0 }
m30000| Thu Jun 14 01:38:55 [FileAllocator] allocating new datafile /data/db/jump10/test.1, filling with zeroes...
m30000| Thu Jun 14 01:38:56 [FileAllocator] done allocating datafile /data/db/jump10/test.1, size: 32MB, took 0.685 secs
m30001| Thu Jun 14 01:38:56 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 0.0 }, max: { x: 53.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 53, clonedBytes: 532279, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:38:56 [conn4] moveChunk setting version to: 2|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:38:56 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 0.0 } -> { x: 53.0 }
m30000| Thu Jun 14 01:38:56 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:56-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652336816), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 53.0 }, step1 of 5: 2000, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 1009 } }
m30000| Thu Jun 14 01:38:56 [initandlisten] connection accepted from 127.0.0.1:56685 #10 (10 connections now open)
m30001| Thu Jun 14 01:38:56 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 0.0 }, max: { x: 53.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 53, clonedBytes: 532279, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:38:56 [conn4] moveChunk updating self version to: 2|1||4fd978ebf59a384fa81d5cff through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:38:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:56-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652336820), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:38:56 [conn4] doing delete inline
m30001| Thu Jun 14 01:38:56 [conn4] moveChunk deleted: 53
m30001| Thu Jun 14 01:38:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:38:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:38:56-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652336828), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 0.0 }, max: { x: 53.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 3010, step5 of 6: 12, step6 of 6: 6 } }
m30001| Thu Jun 14 01:38:56 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 0.0 }, max: { x: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:35959 w:6370 reslen:37 3032ms
m30999| Thu Jun 14 01:38:56 [conn] moveChunk result: { ok: 1.0 }
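The migration logged above was driven through mongos by a movechunk request. As a rough illustration only (not part of this log), the same kind of move can be issued by hand from a mongo shell, assuming the mongos from this test is still listening on localhost:30999; the find document just has to fall inside the chunk being moved.

    // Hypothetical shell session against the mongos assumed at localhost:30999.
    // This mirrors the moveChunk request seen in the log above.
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({
        moveChunk: "test.foo",      // sharded namespace
        find:      { x: 0 },        // any shard-key value inside the chunk to move
        to:        "shard0000"      // destination shard _id from config.shards
    }));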
m30999| Thu Jun 14 01:38:56 [conn] loading chunk manager for collection test.foo using old chunk manager w/ version 1|38||4fd978ebf59a384fa81d5cff and 20 chunks
m30999| Thu Jun 14 01:38:56 [conn] loaded 3 chunks into new chunk manager for test.foo with version 2|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:38:56 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 2|1||4fd978ebf59a384fa81d5cff based on: 1|38||4fd978ebf59a384fa81d5cff
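After the move, mongos reloads its ChunkManager and ends up at version 2|1. A minimal sketch of checking that cached version from a shell, again assuming the mongos on localhost:30999, is the getShardVersion admin command:

    // Sketch: ask the mongos which version it currently holds for test.foo.
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({ getShardVersion: "test.foo" }));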
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.foo chunks:
              shard0001 19
              shard0000 1
              { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0001 Timestamp(2000, 1)
              { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
              { "x" : 53 } -->> { "x" : 173 } on : shard0001 Timestamp(1000, 5)
              { "x" : 173 } -->> { "x" : 292 } on : shard0001 Timestamp(1000, 7)
              { "x" : 292 } -->> { "x" : 401 } on : shard0001 Timestamp(1000, 9)
              { "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(1000, 11)
              { "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
              { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
              { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
              { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
              { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
              { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
              { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
              { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
              { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
              { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
              { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
              { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
              { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
              { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 1, "shard0001" : 19 }
diff: 18
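The { "shard0000" : 1, "shard0001" : 19 } line and "diff: 18" are the test's own tally of chunks per shard. A rough sketch of how such a tally can be computed from config.chunks (illustrative only, not the test's actual code; db is assumed to point at the mongos):

    // Sketch: count chunks per shard for test.foo and compute the imbalance.
    var config = db.getSiblingDB("config");
    var counts = {};
    config.chunks.find({ ns: "test.foo" }).forEach(function(c) {
        counts[c.shard] = (counts[c.shard] || 0) + 1;
    });
    printjson(counts);                                 // e.g. { "shard0000" : 1, "shard0001" : 19 }
    var vals = [];
    for (var s in counts) vals.push(counts[s]);
    print("diff: " + (Math.max.apply(null, vals) - Math.min.apply(null, vals)));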
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:38:56 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.foo chunks:
              shard0001 19
              shard0000 1
              { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0001 Timestamp(2000, 1)
              { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
              { "x" : 53 } -->> { "x" : 173 } on : shard0001 Timestamp(1000, 5)
              { "x" : 173 } -->> { "x" : 292 } on : shard0001 Timestamp(1000, 7)
              { "x" : 292 } -->> { "x" : 401 } on : shard0001 Timestamp(1000, 9)
              { "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(1000, 11)
              { "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
              { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
              { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
              { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
              { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
              { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
              { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
              { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
              { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
              { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
              { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
              { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
              { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
              { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:01 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:39:01 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:39:01 [initandlisten] connection accepted from 127.0.0.1:56686 #11 (11 connections now open)
m30999| Thu Jun 14 01:39:01 [Balancer] connected connection!
m30999| Thu Jun 14 01:39:01 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:01 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:01 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd978f5f59a384fa81d5d00" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd978ebf59a384fa81d5cfe" } }
m30999| Thu Jun 14 01:39:01 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd978f5f59a384fa81d5d00
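The lock document printed above lives in the config database's locks collection, so the current holder can be inspected directly; a small sketch, assuming a shell connected to the same mongos:

    // Sketch: show the balancer's distributed-lock document.
    printjson(db.getSiblingDB("config").locks.findOne({ _id: "balancer" }));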
m30999| Thu Jun 14 01:39:01 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:01 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:01 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:01 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:01 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:01 [Balancer] shard0000
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:01 [Balancer] shard0001
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:01 [Balancer] ----
m30999| Thu Jun 14 01:39:01 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:01 [Balancer] donor : 19 chunks on shard0001
m30999| Thu Jun 14 01:39:01 [Balancer] receiver : 1 chunks on shard0000
m30999| Thu Jun 14 01:39:01 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
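The ShardInfoMap and ShardToChunksMap dumped above are assembled from config metadata; the same raw inputs can be read back from config.shards and config.chunks. A sketch, assuming a shell connected to the mongos:

    // Sketch: the inputs the balancer works from.
    var config = db.getSiblingDB("config");
    config.shards.find().forEach(printjson);                                      // shard ids and hosts
    config.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(printjson);   // current chunk layout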
m30999| Thu Jun 14 01:39:01 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { x: MinKey } max: { x: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:01 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: MinKey }, max: { x: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:01 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:01 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978f5bb51f6302c92cded
m30001| Thu Jun 14 01:39:01 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:01-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652341677), what: "moveChunk.start", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:01 [conn4] moveChunk request accepted at version 2|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:01 [conn4] moveChunk number of documents: 0
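The "about to log metadata event" entries (moveChunk.start, moveChunk.commit, moveChunk.to, moveChunk.from) end up in the config changelog, which makes it possible to review migration history after the fact; a sketch, assuming a shell on the mongos:

    // Sketch: recent moveChunk events for test.foo from the config changelog.
    db.getSiblingDB("config").changelog.find({ ns: "test.foo", what: /^moveChunk/ })
      .sort({ time: -1 }).limit(5).forEach(printjson);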
m30000| Thu Jun 14 01:39:01 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: MinKey } -> { x: 0.0 }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 1, "shard0001" : 19 }
diff: 18
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:01 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.foo chunks:
              shard0001 19
              shard0000 1
              { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0001 Timestamp(2000, 1)
              { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
              { "x" : 53 } -->> { "x" : 173 } on : shard0001 Timestamp(1000, 5)
              { "x" : 173 } -->> { "x" : 292 } on : shard0001 Timestamp(1000, 7)
              { "x" : 292 } -->> { "x" : 401 } on : shard0001 Timestamp(1000, 9)
              { "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(1000, 11)
              { "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
              { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
              { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
              { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
              { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
              { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
              { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
              { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
              { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
              { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
              { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
              { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
              { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
              { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30001| Thu Jun 14 01:39:02 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: MinKey }, max: { x: 0.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:39:02 [conn4] moveChunk setting version to: 3|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:39:02 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: MinKey } -> { x: 0.0 }
m30000| Thu Jun 14 01:39:02 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:02-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652342688), what: "moveChunk.to", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30001| Thu Jun 14 01:39:02 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: MinKey }, max: { x: 0.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:39:02 [conn4] moveChunk updating self version to: 3|1||4fd978ebf59a384fa81d5cff through { x: 53.0 } -> { x: 173.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:39:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:02-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652342693), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:02 [conn4] doing delete inline
m30001| Thu Jun 14 01:39:02 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:39:02 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:02 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:02-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652342693), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:39:02 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: MinKey }, max: { x: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:36055 w:6426 reslen:37 1017ms
m30999| Thu Jun 14 01:39:02 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:39:02 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 2|1||4fd978ebf59a384fa81d5cff and 20 chunks
m30999| Thu Jun 14 01:39:02 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 3|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:02 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 3|1||4fd978ebf59a384fa81d5cff based on: 2|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:02 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:02 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
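Rounds like the one that just finished run only while the balancer is enabled, and "Refreshing MaxChunkSize: 1" reflects the small chunk size this run uses. Both switches are ordinary documents in config.settings; a sketch of reading and toggling them (illustrative, not part of the test; db assumed on the mongos):

    // Sketch: inspect and toggle balancer-related settings.
    var settings = db.getSiblingDB("config").settings;
    printjson(settings.findOne({ _id: "chunksize" }));   // max chunk size in MB (1 in this run)
    printjson(settings.findOne({ _id: "balancer" }));    // null or { stopped: true/false }
    settings.update({ _id: "balancer" }, { $set: { stopped: true } },  true);   // pause (upsert)
    settings.update({ _id: "balancer" }, { $set: { stopped: false } }, true);   // resume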
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 2, "shard0001" : 18 }
diff: 16
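
The two lines above are the test's running summary of chunk placement: chunks per shard as read from config.chunks, and the gap between the most- and least-loaded shard that the balancer is expected to drive down. A minimal sketch (not the test's own helper) of how those numbers can be reproduced from a mongo shell connected to the mongos on port 30999:

    var conf = db.getSiblingDB("config");
    var counts = {};
    conf.chunks.find({ ns: "test.foo" }).forEach(function (c) {
        counts[c.shard] = (counts[c.shard] || 0) + 1;   // tally chunks per shard
    });
    var vals = [];
    for (var s in counts) { vals.push(counts[s]); }
    printjson(counts);                                  // e.g. { "shard0000" : 2, "shard0001" : 18 }
    print("diff: " + (Math.max.apply(null, vals) - Math.min.apply(null, vals)));
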
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:06 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0000 2
shard0001 18
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
{ "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
{ "x" : 53 } -->> { "x" : 173 } on : shard0001 Timestamp(3000, 1)
{ "x" : 173 } -->> { "x" : 292 } on : shard0001 Timestamp(1000, 7)
{ "x" : 292 } -->> { "x" : 401 } on : shard0001 Timestamp(1000, 9)
{ "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(1000, 11)
{ "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
{ "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
{ "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
{ "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
{ "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
{ "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
{ "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
{ "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
{ "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
{ "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
{ "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
{ "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
{ "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
{ "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:07 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:07 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:07 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd978fbf59a384fa81d5d01" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd978f5f59a384fa81d5d00" } }
m30999| Thu Jun 14 01:39:07 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd978fbf59a384fa81d5d01
m30999| Thu Jun 14 01:39:07 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:07 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:07 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:07 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:07 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:07 [Balancer] shard0000
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:07 [Balancer] shard0001
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] ----
m30999| Thu Jun 14 01:39:07 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:07 [Balancer] donor : 18 chunks on shard0001
m30999| Thu Jun 14 01:39:07 [Balancer] receiver : 2 chunks on shard0000
m30999| Thu Jun 14 01:39:07 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:07 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 3|1||000000000000000000000000 min: { x: 53.0 } max: { x: 173.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:07 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 53.0 }, max: { x: 173.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_53.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:07 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:07 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd978fbbb51f6302c92cdee
m30001| Thu Jun 14 01:39:07 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:07-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652347701), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 173.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:07 [conn4] moveChunk request accepted at version 3|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:07 [conn4] moveChunk number of documents: 120
m30000| Thu Jun 14 01:39:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 53.0 } -> { x: 173.0 }
m30001| Thu Jun 14 01:39:08 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 53.0 }, max: { x: 173.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 120, clonedBytes: 1205160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:39:08 [conn4] moveChunk setting version to: 4|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:39:08 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 53.0 } -> { x: 173.0 }
m30000| Thu Jun 14 01:39:08 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:08-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652348708), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 173.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 985 } }
m30001| Thu Jun 14 01:39:08 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 53.0 }, max: { x: 173.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 120, clonedBytes: 1205160, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:39:08 [conn4] moveChunk updating self version to: 4|1||4fd978ebf59a384fa81d5cff through { x: 173.0 } -> { x: 292.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:39:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:08-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652348713), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 173.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:08 [conn4] doing delete inline
m30001| Thu Jun 14 01:39:08 [conn4] moveChunk deleted: 120
m30001| Thu Jun 14 01:39:08 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:08-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652348725), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 173.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 8, step6 of 6: 11 } }
m30001| Thu Jun 14 01:39:08 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 53.0 }, max: { x: 173.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_53.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:36330 w:17010 reslen:37 1024ms
m30999| Thu Jun 14 01:39:08 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:39:08 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 3|1||4fd978ebf59a384fa81d5cff and 20 chunks
m30999| Thu Jun 14 01:39:08 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 4|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:08 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 4|1||4fd978ebf59a384fa81d5cff based on: 3|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:08 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:08 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
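
The migration in this round was initiated by the balancer, which sent the internal moveChunk command (with explicit min/max bounds, shardId and configdb) straight to the donor shard, as logged by m30001. The same chunk could equally have been migrated by hand with the user-facing form of the command issued through mongos; the values below are illustrative only:

    // ask mongos to move the test.foo chunk containing { x: 53 } to shard0000
    db.getSiblingDB("admin").runCommand({
        moveChunk: "test.foo",
        find: { x: 53 },        // any shard-key value falling inside the chunk
        to: "shard0000"         // name of the destination shard
    });
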
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 3, "shard0001" : 17 }
diff: 14
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:11 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0000 3
shard0001 17
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
{ "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
{ "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
{ "x" : 173 } -->> { "x" : 292 } on : shard0001 Timestamp(4000, 1)
{ "x" : 292 } -->> { "x" : 401 } on : shard0001 Timestamp(1000, 9)
{ "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(1000, 11)
{ "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
{ "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
{ "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
{ "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
{ "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
{ "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
{ "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
{ "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
{ "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
{ "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
{ "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
{ "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
{ "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
{ "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:13 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:13 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:13 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97901f59a384fa81d5d02" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd978fbf59a384fa81d5d01" } }
m30999| Thu Jun 14 01:39:13 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd97901f59a384fa81d5d02
m30999| Thu Jun 14 01:39:13 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:13 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:13 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:13 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:13 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:13 [Balancer] shard0000
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:13 [Balancer] shard0001
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] ----
m30999| Thu Jun 14 01:39:13 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:13 [Balancer] donor : 17 chunks on shard0001
m30999| Thu Jun 14 01:39:13 [Balancer] receiver : 3 chunks on shard0000
m30999| Thu Jun 14 01:39:13 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_173.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:13 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 4|1||000000000000000000000000 min: { x: 173.0 } max: { x: 292.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:13 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 173.0 }, max: { x: 292.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_173.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:13 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:13 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97901bb51f6302c92cdef
m30001| Thu Jun 14 01:39:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:13-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652353733), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 173.0 }, max: { x: 292.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:13 [conn4] moveChunk request accepted at version 4|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:13 [conn4] moveChunk number of documents: 119
m30000| Thu Jun 14 01:39:13 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 173.0 } -> { x: 292.0 }
m30001| Thu Jun 14 01:39:14 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 173.0 }, max: { x: 292.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 119, clonedBytes: 1195117, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:39:14 [conn4] moveChunk setting version to: 5|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:39:14 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 173.0 } -> { x: 292.0 }
m30000| Thu Jun 14 01:39:14 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:14-3", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652354745), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 173.0 }, max: { x: 292.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 24, step4 of 5: 0, step5 of 5: 986 } }
m30001| Thu Jun 14 01:39:14 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 173.0 }, max: { x: 292.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 119, clonedBytes: 1195117, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:39:14 [conn4] moveChunk updating self version to: 5|1||4fd978ebf59a384fa81d5cff through { x: 292.0 } -> { x: 401.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:39:14 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:14-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652354749), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 173.0 }, max: { x: 292.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:14 [conn4] doing delete inline
m30001| Thu Jun 14 01:39:14 [conn4] moveChunk deleted: 119
m30001| Thu Jun 14 01:39:14 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:14 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:14-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652354761), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 173.0 }, max: { x: 292.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 11 } }
m30001| Thu Jun 14 01:39:14 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 173.0 }, max: { x: 292.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_173.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:36608 w:27445 reslen:37 1029ms
m30999| Thu Jun 14 01:39:14 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:39:14 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 4|1||4fd978ebf59a384fa81d5cff and 20 chunks
m30999| Thu Jun 14 01:39:14 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 5|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:14 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 5|1||4fd978ebf59a384fa81d5cff based on: 4|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:14 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:14 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
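
Every round also brackets its work with the 'balancer' distributed lock: mongos prints the lock document it intends to write (state: 1, with who/process/when/why/ts) next to the current document (state: 0, i.e. free), then logs "acquired" and, once the migration finishes, "unlocked". The lock is an ordinary document in the config database and can be inspected directly; a small sketch, assuming the config.locks layout shown in these log lines:

    var conf = db.getSiblingDB("config");
    printjson(conf.locks.findOne({ _id: "balancer" }));
    // state 0 = unlocked, 1 = being acquired, 2 = held; "who" and "ts" match the values mongos logs
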
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 4, "shard0001" : 16 }
diff: 12
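
Besides the live chunk counts, each migration above is also recorded in the config server's changelog: the "about to log metadata event" lines write moveChunk.start, moveChunk.to, moveChunk.commit and moveChunk.from documents with per-step timings. A sketch of how those events could be reviewed after the run, assuming the changelog fields shown in the log:

    var conf = db.getSiblingDB("config");
    conf.changelog.find({ ns: "test.foo", what: /^moveChunk/ })
                  .sort({ time: 1 })
                  .forEach(printjson);   // start/commit/from/to events with step timings
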
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:16 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.foo chunks:
              shard0000  4
              shard0001  16
              { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
              { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
              { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
              { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
              { "x" : 292 } -->> { "x" : 401 } on : shard0001 Timestamp(5000, 1)
              { "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(1000, 11)
              { "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
              { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
              { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
              { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
              { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
              { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
              { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
              { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
              { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
              { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
              { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
              { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
              { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
              { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:19 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:19 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:19 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97907f59a384fa81d5d03" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97901f59a384fa81d5d02" } }
m30999| Thu Jun 14 01:39:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd97907f59a384fa81d5d03
m30999| Thu Jun 14 01:39:19 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:19 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:19 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:19 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:19 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:19 [Balancer] shard0000
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:19 [Balancer] shard0001
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] ----
m30999| Thu Jun 14 01:39:19 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:19 [Balancer] donor : 16 chunks on shard0001
m30999| Thu Jun 14 01:39:19 [Balancer] receiver : 4 chunks on shard0000
m30999| Thu Jun 14 01:39:19 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_292.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:19 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 5|1||000000000000000000000000 min: { x: 292.0 } max: { x: 401.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:19 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 292.0 }, max: { x: 401.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_292.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:19 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:19 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97907bb51f6302c92cdf0
m30001| Thu Jun 14 01:39:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:19-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652359769), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 292.0 }, max: { x: 401.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:19 [conn4] moveChunk request accepted at version 5|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:19 [conn4] moveChunk number of documents: 109
m30000| Thu Jun 14 01:39:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 292.0 } -> { x: 401.0 }
m30001| Thu Jun 14 01:39:20 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 292.0 }, max: { x: 401.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 109, clonedBytes: 1094687, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:39:20 [conn4] moveChunk setting version to: 6|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:39:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 292.0 } -> { x: 401.0 }
m30000| Thu Jun 14 01:39:20 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:20-4", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652360785), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 292.0 }, max: { x: 401.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 18, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:39:20 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 292.0 }, max: { x: 401.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 109, clonedBytes: 1094687, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:39:20 [conn4] moveChunk updating self version to: 6|1||4fd978ebf59a384fa81d5cff through { x: 401.0 } -> { x: 500.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:39:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:20-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652360790), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 292.0 }, max: { x: 401.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:20 [conn4] doing delete inline
m30001| Thu Jun 14 01:39:20 [conn4] moveChunk deleted: 109
m30001| Thu Jun 14 01:39:20 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:20-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652360802), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 292.0 }, max: { x: 401.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 16, step6 of 6: 10 } }
m30001| Thu Jun 14 01:39:20 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 292.0 }, max: { x: 401.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_292.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:36863 w:37256 reslen:37 1033ms
m30999| Thu Jun 14 01:39:20 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:39:20 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 5|1||4fd978ebf59a384fa81d5cff and 20 chunks
m30999| Thu Jun 14 01:39:20 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 6|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:20 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 6|1||4fd978ebf59a384fa81d5cff based on: 5|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:20 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
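In the round above the balancer treats shard0001 (16 chunks) as the donor and shard0000 (4 chunks) as the receiver, and it migrates exactly one chunk per round until the spread falls under its threshold. A simplified sketch of that selection in shell JavaScript, not the server's actual C++ policy code, with an illustrative threshold value:

// Simplified, illustrative version of the per-round choice logged above.
function chooseMigration(chunkCounts, threshold) {       // e.g. ({ shard0000: 4, shard0001: 16 }, 2)
    var donor = null, receiver = null;
    for (var s in chunkCounts) {
        if (donor === null || chunkCounts[s] > chunkCounts[donor]) donor = s;
        if (receiver === null || chunkCounts[s] < chunkCounts[receiver]) receiver = s;
    }
    if (chunkCounts[donor] - chunkCounts[receiver] <= threshold) return null;  // balanced enough
    return { from: donor, to: receiver };                                      // move one chunk this round
}
printjson(chooseMigration({ shard0000: 4, shard0001: 16 }, 2));   // { "from" : "shard0001", "to" : "shard0000" }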
m30999| Thu Jun 14 01:39:21 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:39:21 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652331:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 5, "shard0001" : 15 }
diff: 10
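The two lines above are the test script's own summary: it counts the test.foo chunks per shard in config.chunks and prints the spread between the largest and smallest count. A shell sketch of the same computation against this run's mongos:

// Sketch: recompute the per-shard chunk counts and their spread.
var config = connect("localhost:30999/config");
var counts = {};
config.chunks.find({ ns: "test.foo" }).forEach(function (c) {
    counts[c.shard] = (counts[c.shard] || 0) + 1;
});
printjson(counts);                                        // e.g. { "shard0000" : 5, "shard0001" : 15 }
var hi = null, lo = null;
for (var s in counts) {
    if (hi === null || counts[s] > hi) hi = counts[s];
    if (lo === null || counts[s] < lo) lo = counts[s];
}
print("diff: " + (hi - lo));                              // e.g. diff: 10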
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:21 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.foo chunks:
              shard0000  5
              shard0001  15
              { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
              { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
              { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
              { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
              { "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
              { "x" : 401 } -->> { "x" : 500 } on : shard0001 Timestamp(6000, 1)
              { "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(1000, 13)
              { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
              { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
              { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
              { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
              { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
              { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
              { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
              { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
              { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
              { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
              { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
              { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
              { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:25 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:25 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:25 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd9790df59a384fa81d5d04" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97907f59a384fa81d5d03" } }
m30999| Thu Jun 14 01:39:25 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd9790df59a384fa81d5d04
m30999| Thu Jun 14 01:39:25 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:25 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:25 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:25 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:25 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:25 [Balancer] shard0000
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:25 [Balancer] shard0001
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] ----
m30999| Thu Jun 14 01:39:25 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:25 [Balancer] donor : 15 chunks on shard0001
m30999| Thu Jun 14 01:39:25 [Balancer] receiver : 5 chunks on shard0000
m30999| Thu Jun 14 01:39:25 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_401.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:25 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 6|1||000000000000000000000000 min: { x: 401.0 } max: { x: 500.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:25 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 401.0 }, max: { x: 500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_401.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:25 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:25 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd9790dbb51f6302c92cdf1
m30001| Thu Jun 14 01:39:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:25-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652365810), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 401.0 }, max: { x: 500.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:25 [conn4] moveChunk request accepted at version 6|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:25 [conn4] moveChunk number of documents: 99
m30000| Thu Jun 14 01:39:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 401.0 } -> { x: 500.0 }
m30001| Thu Jun 14 01:39:26 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 401.0 }, max: { x: 500.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 99, clonedBytes: 994257, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:39:26 [conn4] moveChunk setting version to: 7|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:39:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 401.0 } -> { x: 500.0 }
m30000| Thu Jun 14 01:39:26 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:26-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652366817), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 401.0 }, max: { x: 500.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 989 } }
m30001| Thu Jun 14 01:39:26 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 401.0 }, max: { x: 500.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 99, clonedBytes: 994257, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:39:26 [conn4] moveChunk updating self version to: 7|1||4fd978ebf59a384fa81d5cff through { x: 500.0 } -> { x: 563.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:39:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:26-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652366822), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 401.0 }, max: { x: 500.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:26 [conn4] doing delete inline
m30001| Thu Jun 14 01:39:26 [conn4] moveChunk deleted: 99
m30001| Thu Jun 14 01:39:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:26-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652366832), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 401.0 }, max: { x: 500.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 8, step6 of 6: 9 } }
m30001| Thu Jun 14 01:39:26 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 401.0 }, max: { x: 500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_401.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:37105 w:46007 reslen:37 1023ms
m30999| Thu Jun 14 01:39:26 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:39:26 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 6|1||4fd978ebf59a384fa81d5cff and 20 chunks
m30999| Thu Jun 14 01:39:26 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 7|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:26 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 7|1||4fd978ebf59a384fa81d5cff based on: 6|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:26 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:26 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
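The donor-side lines (m30001) above trace the migration protocol step by step: the moveChunk request arrives, the per-collection distributed lock is taken, the 99 documents are cloned to the recipient, the transfer reaches the "steady" state, the commit bumps the version to 7|0, the moved range is deleted on the donor, and the lock is released. The balancer itself only issues the moveChunk command; the same migration can be requested by hand through the mongos. Sketch only, using this run's namespace and a shard key value that falls in the next chunk still on shard0001 (500 to 563):

// Sketch: trigger an equivalent migration manually via the mongos.
var admin = connect("localhost:30999/admin");
printjson(admin.runCommand({ moveChunk: "test.foo", find: { x: 550 }, to: "shard0000" }));
// Committed migrations are also recorded in config.changelog, matching the
// "moveChunk.commit" metadata events in the log above.
var config = admin.getSiblingDB("config");
config.changelog.find({ what: "moveChunk.commit", ns: "test.foo" })
      .sort({ time: -1 }).limit(1).forEach(printjson);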
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 6, "shard0001" : 14 }
diff: 8
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:26 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
          test.foo chunks:
              shard0000  6
              shard0001  14
              { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
              { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
              { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
              { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
              { "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
              { "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
              { "x" : 500 } -->> { "x" : 563 } on : shard0001 Timestamp(7000, 1)
              { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
              { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
              { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
              { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
              { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
              { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
              { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
              { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
              { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
              { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
              { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
              { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
              { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:31 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:31 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:31 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97913f59a384fa81d5d05" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd9790df59a384fa81d5d04" } }
m30999| Thu Jun 14 01:39:31 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd97913f59a384fa81d5d05
m30999| Thu Jun 14 01:39:31 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:31 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:31 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:31 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:31 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:31 [Balancer] shard0000
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:31 [Balancer] shard0001
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] ----
m30999| Thu Jun 14 01:39:31 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:31 [Balancer] donor : 14 chunks on shard0001
m30999| Thu Jun 14 01:39:31 [Balancer] receiver : 6 chunks on shard0000
m30999| Thu Jun 14 01:39:31 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_500.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:31 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 7|1||000000000000000000000000 min: { x: 500.0 } max: { x: 563.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:31 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 500.0 }, max: { x: 563.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97913bb51f6302c92cdf2
m30001| Thu Jun 14 01:39:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:31-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652371838), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 563.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:31 [conn4] moveChunk request accepted at version 7|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:31 [conn4] warning: can't move chunk of size (approximately) 5654772 because maximum size allowed to move is 1048576 ns: test.foo { x: 500.0 } -> { x: 563.0 }
m30001| Thu Jun 14 01:39:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:31-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652371841), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 563.0 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } }
m30999| Thu Jun 14 01:39:31 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 5654772, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Thu Jun 14 01:39:31 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 5654772, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { x: 500.0 } max: { x: 500.0 }
m30999| Thu Jun 14 01:39:31 [Balancer] forcing a split because migrate failed for size reasons
m30001| Thu Jun 14 01:39:31 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : 563.0 }
m30001| Thu Jun 14 01:39:31 [conn4] splitVector doing another cycle because of force, keyCount now: 281
m30001| Thu Jun 14 01:39:31 [conn4] warning: chunk is larger than 20088000 bytes because of key { x: 500.0 }
m30001| Thu Jun 14 01:39:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 500.0 }, max: { x: 563.0 }, from: "shard0001", splitKeys: [ { x: 501.0 } ], shardId: "test.foo-x_500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97913bb51f6302c92cdf3
m30001| Thu Jun 14 01:39:31 [conn4] splitChunk accepted at version 7|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:31-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652371845), what: "split", ns: "test.foo", details: { before: { min: { x: 500.0 }, max: { x: 563.0 }, lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 500.0 }, max: { x: 501.0 }, lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, right: { min: { x: 501.0 }, max: { x: 563.0 }, lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') } } }
m30001| Thu Jun 14 01:39:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30999| Thu Jun 14 01:39:31 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 7|1||4fd978ebf59a384fa81d5cff and 20 chunks
m30999| Thu Jun 14 01:39:31 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 7|3||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:31 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 7|3||4fd978ebf59a384fa81d5cff based on: 7|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:31 [Balancer] forced split results: { ok: 1.0 }
m30999| Thu Jun 14 01:39:31 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:31 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
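The "Refreshing MaxChunkSize: 1" line shows this balancer round is running with a 1 MB chunk-size limit, which is why the migration above is rejected as chunkTooBig (estimated 5654772 bytes against a 1048576-byte limit) and a split is forced instead. A sketch of how such a limit is typically stored, via the "chunksize" document in config.settings (value is in MB); this mirrors the 1 MB setting of this run but is not taken from the test's own script:

    // set the global max chunk size to 1 MB (config.settings, value in MB)
    var configDB = db.getSiblingDB("config");
    configDB.settings.update({ _id: "chunksize" }, { $set: { value: 1 } }, true /* upsert */);
    printjson(configDB.settings.findOne({ _id: "chunksize" }));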
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 6, "shard0001" : 15 }
diff: 9
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:31 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
      test.foo chunks:
        shard0000  6
        shard0001  15
        { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
        { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
        { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
        { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
        { "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
        { "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
        { "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(7000, 2)
        { "x" : 501 } -->> { "x" : 563 } on : shard0001 Timestamp(7000, 3)
        { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
        { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
        { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
        { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
        { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
        { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
        { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
        { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
        { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
        { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
        { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
        { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
        { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 6, "shard0001" : 15 }
diff: 9
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:36 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
      test.foo chunks:
        shard0000  6
        shard0001  15
        { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
        { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
        { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
        { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
        { "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
        { "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
        { "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(7000, 2)
        { "x" : 501 } -->> { "x" : 563 } on : shard0001 Timestamp(7000, 3)
        { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
        { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
        { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
        { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
        { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
        { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
        { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
        { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
        { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
        { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
        { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
        { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
        { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:41 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:41 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:41 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd9791df59a384fa81d5d06" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97913f59a384fa81d5d05" } }
m30999| Thu Jun 14 01:39:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd9791df59a384fa81d5d06
m30999| Thu Jun 14 01:39:41 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:41 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:41 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:41 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:41 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:41 [Balancer] shard0000
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:41 [Balancer] shard0001
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 501.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_501.0", lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 501.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] ----
m30999| Thu Jun 14 01:39:41 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:41 [Balancer] donor : 15 chunks on shard0001
m30999| Thu Jun 14 01:39:41 [Balancer] receiver : 6 chunks on shard0000
m30999| Thu Jun 14 01:39:41 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_500.0", lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 501.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:41 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 7|2||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:41 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 500.0 }, max: { x: 501.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:41 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:41 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd9791dbb51f6302c92cdf4
m30001| Thu Jun 14 01:39:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:41-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652381855), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 501.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:41 [conn4] moveChunk request accepted at version 7|3||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:41 [conn4] warning: can't move chunk of size (approximately) 5032044 because maximum size allowed to move is 1048576 ns: test.foo { x: 500.0 } -> { x: 501.0 }
m30001| Thu Jun 14 01:39:41 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:41-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652381857), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 501.0 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } }
m30999| Thu Jun 14 01:39:41 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 5032044, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Thu Jun 14 01:39:41 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 5032044, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { x: 500.0 } max: { x: 500.0 }
m30999| Thu Jun 14 01:39:41 [Balancer] forcing a split because migrate failed for size reasons
m30001| Thu Jun 14 01:39:41 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : 501.0 }
m30001| Thu Jun 14 01:39:41 [conn4] splitVector doing another cycle because of force, keyCount now: 250
m30001| Thu Jun 14 01:39:41 [conn4] warning: chunk is larger than 20088000 bytes because of key { x: 500.0 }
m30999| Thu Jun 14 01:39:41 [Balancer] want to split chunk, but can't find split point chunk ns:test.foo at: shard0001:localhost:30001 lastmod: 7|2||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 } got: <empty>
m30999| Thu Jun 14 01:39:41 [Balancer] forced split results: {}
m30999| Thu Jun 14 01:39:41 [Balancer] marking chunk as jumbo: ns:test.foo at: shard0001:localhost:30001 lastmod: 7|2||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 }
m30999| Thu Jun 14 01:39:41 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
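In this round the balancer gives up on the { x: 500 } -->> { x: 501 } chunk: it is still too large to migrate under the 1 MB limit, no further split point can be found (the splitVector warning suggests the remaining documents all share the shard key value x: 500), so the chunk is flagged as jumbo and skipped from then on. A sketch of how such chunks can be listed afterwards from the config metadata; the jumbo field only appears on chunks the balancer has flagged:

    // list chunks flagged as jumbo for test.foo
    var configDB = db.getSiblingDB("config");
    configDB.chunks.find({ ns: "test.foo", jumbo: true }).forEach(printjson);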
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 6, "shard0001" : 15 }
diff: 9
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:41 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
      test.foo chunks:
          shard0000 6
          shard0001 15
        { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
        { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
        { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
        { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
        { "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
        { "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
        { "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(7000, 2) jumbo
        { "x" : 501 } -->> { "x" : 563 } on : shard0001 Timestamp(7000, 3)
        { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
        { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
        { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
        { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
        { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
        { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
        { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
        { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
        { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
        { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
        { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
        { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
        { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:46 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:46 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:46 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97922f59a384fa81d5d07" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd9791df59a384fa81d5d06" } }
m30999| Thu Jun 14 01:39:46 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd97922f59a384fa81d5d07
m30999| Thu Jun 14 01:39:46 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:46 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:46 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:46 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:46 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:46 [Balancer] shard0000
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:46 [Balancer] shard0001
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_501.0", lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 501.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] ----
m30999| Thu Jun 14 01:39:46 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:46 [Balancer] donor : 14 chunks on shard0001
m30999| Thu Jun 14 01:39:46 [Balancer] receiver : 6 chunks on shard0000
m30999| Thu Jun 14 01:39:46 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_501.0", lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 501.0 }, max: { x: 563.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:46 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 7|3||000000000000000000000000 min: { x: 501.0 } max: { x: 563.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:46 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 501.0 }, max: { x: 563.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_501.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:46 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:46 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97922bb51f6302c92cdf5
m30001| Thu Jun 14 01:39:46 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:46-42", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652386878), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 501.0 }, max: { x: 563.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:46 [conn4] moveChunk request accepted at version 7|3||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:46 [conn4] moveChunk number of documents: 62
m30000| Thu Jun 14 01:39:46 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 501.0 } -> { x: 563.0 }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 6, "shard0001" : 15 }
diff: 9
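The two lines above are printed by the test itself: per-shard chunk counts for test.foo taken from config.chunks, and their difference (15 - 6 = 9), which the balancer keeps trying to drive down. A minimal sketch of that bookkeeping in shell JavaScript (variable names are illustrative, not taken from the test):

    var counts = {};
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).forEach(function (c) {
        counts[c.shard] = (counts[c.shard] || 0) + 1;              // tally chunks per shard
    });
    printjson(counts);                                             // { "shard0000" : 6, "shard0001" : 15 }
    print("diff: " + (counts["shard0001"] - counts["shard0000"])); // diff: 9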
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:46 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
      test.foo chunks:
          shard0000 6
          shard0001 15
        { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
        { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
        { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
        { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
        { "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
        { "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
        { "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(7000, 2) jumbo
        { "x" : 501 } -->> { "x" : 563 } on : shard0001 Timestamp(7000, 3)
        { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
        { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
        { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
        { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
        { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
        { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
        { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
        { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
        { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
        { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
        { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
        { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
        { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30001| Thu Jun 14 01:39:47 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 501.0 }, max: { x: 563.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 62, clonedBytes: 622666, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:39:47 [conn4] moveChunk setting version to: 8|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:39:47 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 501.0 } -> { x: 563.0 }
m30000| Thu Jun 14 01:39:47 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:47-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652387891), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 501.0 }, max: { x: 563.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 1000 } }
m30001| Thu Jun 14 01:39:47 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 501.0 }, max: { x: 563.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 62, clonedBytes: 622666, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:39:47 [conn4] moveChunk updating self version to: 8|1||4fd978ebf59a384fa81d5cff through { x: 500.0 } -> { x: 501.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:39:47 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:47-43", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652387895), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 501.0 }, max: { x: 563.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:47 [conn4] doing delete inline
m30001| Thu Jun 14 01:39:47 [conn4] moveChunk deleted: 62
m30001| Thu Jun 14 01:39:47 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:47 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:47-44", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652387902), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 501.0 }, max: { x: 563.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 6 } }
m30001| Thu Jun 14 01:39:47 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 501.0 }, max: { x: 563.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_501.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:42416 w:51872 reslen:37 1024ms
m30999| Thu Jun 14 01:39:47 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:39:47 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 7|3||4fd978ebf59a384fa81d5cff and 21 chunks
m30999| Thu Jun 14 01:39:47 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 8|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:47 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 8|1||4fd978ebf59a384fa81d5cff based on: 7|3||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:47 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:47 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
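That closes the balancing round: the { x: 501 } -->> { x: 563 } chunk (62 documents, about 0.6 MB) moved from shard0001 to shard0000 and the collection version advanced to 8|1. The same migration could be requested by hand through the mongos; a sketch of the public admin command (the "find" form lets mongos resolve which chunk owns the given key, and sh.moveChunk wraps the same call):

    // run against the mongos; shell helper equivalent: sh.moveChunk("test.foo", { x: 501 }, "shard0000")
    db.adminCommand({
        moveChunk: "test.foo",
        find: { x: 501 },        // any key inside the [501, 563) chunk
        to: "shard0000"
    });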
m30000| Thu Jun 14 01:39:50 [clientcursormon] mem (MB) res:56 virt:203 mapped:64
m30001| Thu Jun 14 01:39:50 [clientcursormon] mem (MB) res:57 virt:197 mapped:64
m30999| Thu Jun 14 01:39:51 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:39:51 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652331:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 7, "shard0001" : 14 }
diff: 7
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:51 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:52 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
      test.foo chunks:
          shard0000 7
          shard0001 14
        { "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
        { "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
        { "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
        { "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
        { "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
        { "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
        { "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(8000, 1)
        { "x" : 501 } -->> { "x" : 563 } on : shard0000 Timestamp(8000, 0)
        { "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
        { "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
        { "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
        { "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
        { "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
        { "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
        { "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
        { "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
        { "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
        { "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
        { "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
        { "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
        { "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:52 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:52 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:52 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97928f59a384fa81d5d08" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97922f59a384fa81d5d07" } }
m30999| Thu Jun 14 01:39:52 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd97928f59a384fa81d5d08
m30999| Thu Jun 14 01:39:52 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:52 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:52 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:52 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:52 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:52 [Balancer] shard0000
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_501.0", lastmod: Timestamp 8000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 501.0 }, max: { x: 563.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:52 [Balancer] shard0001
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 501.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] ----
m30999| Thu Jun 14 01:39:52 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:52 [Balancer] donor : 14 chunks on shard0001
m30999| Thu Jun 14 01:39:52 [Balancer] receiver : 7 chunks on shard0000
m30999| Thu Jun 14 01:39:52 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_500.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 501.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:52 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 8|1||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:52 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 500.0 }, max: { x: 501.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:52 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97928bb51f6302c92cdf6
m30001| Thu Jun 14 01:39:52 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:52-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652392911), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 501.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:52 [conn4] moveChunk request accepted at version 8|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:52 [conn4] warning: can't move chunk of size (approximately) 5032044 because maximum size allowed to move is 1048576 ns: test.foo { x: 500.0 } -> { x: 501.0 }
m30001| Thu Jun 14 01:39:52 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:52 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:52-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652392913), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 501.0 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } }
m30999| Thu Jun 14 01:39:52 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 5032044, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Thu Jun 14 01:39:52 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 5032044, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { x: 500.0 } max: { x: 501.0 }
m30999| Thu Jun 14 01:39:52 [Balancer] forcing a split because migrate failed for size reasons
m30001| Thu Jun 14 01:39:52 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : 501.0 }
m30001| Thu Jun 14 01:39:52 [conn4] splitVector doing another cycle because of force, keyCount now: 250
m30001| Thu Jun 14 01:39:52 [conn4] warning: chunk is larger than 19465272 bytes because of key { x: 500.0 }
m30999| Thu Jun 14 01:39:52 [Balancer] want to split chunk, but can't find split point chunk ns:test.foo at: shard0001:localhost:30001 lastmod: 8|1||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 } got: <empty>
m30999| Thu Jun 14 01:39:52 [Balancer] forced split results: {}
m30999| Thu Jun 14 01:39:52 [Balancer] marking chunk as jumbo: ns:test.foo at: shard0001:localhost:30001 lastmod: 8|1||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 }
m30999| Thu Jun 14 01:39:52 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:52 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
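This round moves nothing: the { x: 500 } -->> { x: 501 } chunk is roughly 5 MB against the 1 MB move limit, and no split point can be found because its documents share the single shard key value x: 500 (the "because of key { x: 500.0 }" warning above), so the balancer flags it as jumbo and skips it in later rounds. Flagged chunks can be listed from the config metadata; a sketch (the jumbo field only appears once it has been set):

    db.getSiblingDB("config").chunks.find(
        { ns: "test.foo", jumbo: true },
        { min: 1, max: 1, shard: 1, lastmod: 1 }
    ).forEach(printjson);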
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 7, "shard0001" : 14 }
diff: 7
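The { "shard0000" : 7, "shard0001" : 14 } / diff: 7 lines are the test tallying test.foo chunks per shard from config.chunks and printing the spread the balancer is trying to close. A rough shell equivalent (an illustrative helper, not the test's own code), again assuming the mongos at localhost:30999:

    // count test.foo chunks per shard and report the imbalance
    var configDB = new Mongo("localhost:30999").getDB("config");
    var counts = {};
    configDB.chunks.find({ ns: "test.foo" }).forEach(function (c) {
        counts[c.shard] = (counts[c.shard] || 0) + 1;
    });
    printjson(counts);

    var most = 0, fewest = Infinity;
    for (var shard in counts) {
        most = Math.max(most, counts[shard]);
        fewest = Math.min(fewest, counts[shard]);
    }
    print("diff: " + (most - fewest));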
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:39:57 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0000 7
shard0001 14
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
{ "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
{ "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
{ "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
{ "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
{ "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
{ "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(8000, 1) jumbo
{ "x" : 501 } -->> { "x" : 563 } on : shard0000 Timestamp(8000, 0)
{ "x" : 563 } -->> { "x" : 675 } on : shard0001 Timestamp(1000, 15)
{ "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
{ "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
{ "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
{ "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
{ "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
{ "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
{ "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
{ "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
{ "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
{ "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
{ "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
{ "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:39:57 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:39:57 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:39:57 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd9792df59a384fa81d5d09" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97928f59a384fa81d5d08" } }
m30999| Thu Jun 14 01:39:57 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd9792df59a384fa81d5d09
m30999| Thu Jun 14 01:39:57 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:39:57 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:39:57 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:57 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:39:57 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:39:57 [Balancer] shard0000
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_501.0", lastmod: Timestamp 8000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 501.0 }, max: { x: 563.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:39:57 [Balancer] shard0001
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] ----
m30999| Thu Jun 14 01:39:57 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:39:57 [Balancer] donor : 13 chunks on shard0001
m30999| Thu Jun 14 01:39:57 [Balancer] receiver : 7 chunks on shard0000
m30999| Thu Jun 14 01:39:57 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_563.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:39:57 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|15||000000000000000000000000 min: { x: 563.0 } max: { x: 675.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:39:57 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 563.0 }, max: { x: 675.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_563.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:39:57 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:39:57 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd9792dbb51f6302c92cdf7
m30001| Thu Jun 14 01:39:57 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:57-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652397924), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 563.0 }, max: { x: 675.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:57 [conn4] moveChunk request accepted at version 8|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:39:57 [conn4] moveChunk number of documents: 112
m30000| Thu Jun 14 01:39:57 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 563.0 } -> { x: 675.0 }
m30001| Thu Jun 14 01:39:58 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 563.0 }, max: { x: 675.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 112, clonedBytes: 1124816, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:39:58 [conn4] moveChunk setting version to: 9|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:39:58 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 563.0 } -> { x: 675.0 }
m30000| Thu Jun 14 01:39:58 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:58-7", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652398939), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 563.0 }, max: { x: 675.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 19, step4 of 5: 0, step5 of 5: 995 } }
m30001| Thu Jun 14 01:39:58 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 563.0 }, max: { x: 675.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 112, clonedBytes: 1124816, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:39:58 [conn4] moveChunk updating self version to: 9|1||4fd978ebf59a384fa81d5cff through { x: 500.0 } -> { x: 501.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:39:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:58-48", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652398944), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 563.0 }, max: { x: 675.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:39:58 [conn4] doing delete inline
m30001| Thu Jun 14 01:39:58 [conn4] moveChunk deleted: 112
m30001| Thu Jun 14 01:39:58 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:39:58 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:39:58-49", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652398955), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 563.0 }, max: { x: 675.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 16, step6 of 6: 10 } }
m30001| Thu Jun 14 01:39:58 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 563.0 }, max: { x: 675.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_563.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:45125 w:61932 reslen:37 1032ms
m30999| Thu Jun 14 01:39:58 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:39:58 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 8|1||4fd978ebf59a384fa81d5cff and 21 chunks
m30999| Thu Jun 14 01:39:58 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 9|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:58 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 9|1||4fd978ebf59a384fa81d5cff based on: 8|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:39:58 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:39:58 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
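This round succeeds: mongos sends a moveChunk command to the donor shard, the recipient clones 112 documents, the donor deletes its copy, and the collection version advances to 9|1. An operator can trigger the same kind of migration by hand through the mongos; a hedged sketch of the equivalent admin command (sh.moveChunk is the shell wrapper for it):

    // ask the mongos to migrate the chunk containing { x: 563 } to shard0000,
    // roughly what the balancer just did for { x: 563.0 } -->> { x: 675.0 }
    var admin = new Mongo("localhost:30999").getDB("admin");
    var res = admin.runCommand({
        moveChunk: "test.foo",     // sharded namespace
        find: { x: 563 },          // any key that falls inside the chunk to move
        to: "shard0000"            // destination shard name from config.shards
    });
    printjson(res);                // { ok: 1 } on success, or { chunkTooBig: true, ... } as above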
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 8, "shard0001" : 13 }
diff: 5
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:02 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0000 8
shard0001 13
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
{ "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
{ "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
{ "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
{ "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
{ "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
{ "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(9000, 1)
{ "x" : 501 } -->> { "x" : 563 } on : shard0000 Timestamp(8000, 0)
{ "x" : 563 } -->> { "x" : 675 } on : shard0000 Timestamp(9000, 0)
{ "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
{ "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
{ "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
{ "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
{ "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
{ "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
{ "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
{ "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
{ "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
{ "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
{ "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
{ "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:40:03 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:40:03 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:40:03 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97933f59a384fa81d5d0a" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd9792df59a384fa81d5d09" } }
m30999| Thu Jun 14 01:40:03 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd97933f59a384fa81d5d0a
m30999| Thu Jun 14 01:40:03 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:40:03 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:40:03 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:40:03 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:40:03 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:40:03 [Balancer] shard0000
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_501.0", lastmod: Timestamp 8000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 501.0 }, max: { x: 563.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 9000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:03 [Balancer] shard0001
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_500.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 501.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] ----
m30999| Thu Jun 14 01:40:03 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:40:03 [Balancer] donor : 13 chunks on shard0001
m30999| Thu Jun 14 01:40:03 [Balancer] receiver : 8 chunks on shard0000
m30999| Thu Jun 14 01:40:03 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_500.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 500.0 }, max: { x: 501.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:03 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 9|1||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:03 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 500.0 }, max: { x: 501.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_500.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:03 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:03 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97933bb51f6302c92cdf8
m30001| Thu Jun 14 01:40:03 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:03-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652403964), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 501.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:03 [conn4] moveChunk request accepted at version 9|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:40:03 [conn4] warning: can't move chunk of size (approximately) 5032044 because maximum size allowed to move is 1048576 ns: test.foo { x: 500.0 } -> { x: 501.0 }
m30001| Thu Jun 14 01:40:03 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:40:03 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:03-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652403966), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 500.0 }, max: { x: 501.0 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } }
m30999| Thu Jun 14 01:40:03 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 5032044, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Thu Jun 14 01:40:03 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 5032044, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { x: 500.0 } max: { x: 500.0 }
m30999| Thu Jun 14 01:40:03 [Balancer] forcing a split because migrate failed for size reasons
m30001| Thu Jun 14 01:40:03 [conn4] request split points lookup for chunk test.foo { : 500.0 } -->> { : 501.0 }
m30001| Thu Jun 14 01:40:03 [conn4] splitVector doing another cycle because of force, keyCount now: 250
m30001| Thu Jun 14 01:40:03 [conn4] warning: chunk is larger than 18340344 bytes because of key { x: 500.0 }
m30999| Thu Jun 14 01:40:03 [Balancer] want to split chunk, but can't find split point chunk ns:test.foo at: shard0001:localhost:30001 lastmod: 9|1||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 } got: <empty>
m30999| Thu Jun 14 01:40:03 [Balancer] forced split results: {}
m30999| Thu Jun 14 01:40:03 [Balancer] marking chunk as jumbo: ns:test.foo at: shard0001:localhost:30001 lastmod: 9|1||000000000000000000000000 min: { x: 500.0 } max: { x: 501.0 }
m30999| Thu Jun 14 01:40:03 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:40:03 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
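The failure mode repeats here: the chunk is about 5 MB against a 1 MB limit, and the forced split finds no split point because every document in the range shares the single shard-key value x: 500 (hence the "chunk is larger than ... because of key { x: 500.0 }" warning). The "request split points lookup" lines correspond to a splitVector call against the donor shard; a sketch of issuing it by hand, assuming the command accepts these fields in this server version:

    // ask the donor shard directly for split points inside the jumbo range;
    // an empty splitKeys array is why mongos logs "can't find split point ... got: <empty>"
    var shardAdmin = new Mongo("localhost:30001").getDB("admin");
    printjson(shardAdmin.runCommand({
        splitVector: "test.foo",
        keyPattern: { x: 1 },
        min: { x: 500 }, max: { x: 501 },
        maxChunkSizeBytes: 1048576,
        force: true
    }));

Since a chunk can never be split inside a single shard-key value, the practical remedies are a more granular shard key or a larger chunk size, not further migration attempts.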
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 8, "shard0001" : 13 }
diff: 5
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:07 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0000 8
shard0001 13
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
{ "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
{ "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
{ "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
{ "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
{ "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
{ "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(9000, 1) jumbo
{ "x" : 501 } -->> { "x" : 563 } on : shard0000 Timestamp(8000, 0)
{ "x" : 563 } -->> { "x" : 675 } on : shard0000 Timestamp(9000, 0)
{ "x" : 675 } -->> { "x" : 787 } on : shard0001 Timestamp(1000, 17)
{ "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
{ "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
{ "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
{ "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
{ "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
{ "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
{ "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
{ "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
{ "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
{ "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
{ "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:40:08 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:40:08 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652331:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:40:08 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97938f59a384fa81d5d0b" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97933f59a384fa81d5d0a" } }
m30999| Thu Jun 14 01:40:08 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' acquired, ts : 4fd97938f59a384fa81d5d0b
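Each balancing round begins by taking the "balancer" document in config.locks as a distributed lock; the two documents printed above are the lock entry the balancer wants to write (state 1, with who/why/ts) and the current unlocked entry (state 0). An illustrative way to inspect that lock from the shell, shown only as a sketch:

    var config = db.getSiblingDB("config");
    printjson(config.locks.findOne({ _id: "balancer" }));                       // who holds the balancer lock, and why
    printjson(config.lockpings.find().sort({ ping: -1 }).limit(1).next());      // most recent lock ping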
m30999| Thu Jun 14 01:40:08 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:40:08 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:40:08 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:40:08 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:40:08 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:40:08 [Balancer] shard0000
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 0.0 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 53.0 }, max: { x: 173.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_173.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 173.0 }, max: { x: 292.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_292.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 292.0 }, max: { x: 401.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_401.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 401.0 }, max: { x: 500.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_501.0", lastmod: Timestamp 8000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 501.0 }, max: { x: 563.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_563.0", lastmod: Timestamp 9000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 563.0 }, max: { x: 675.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:40:08 [Balancer] shard0001
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_787.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 787.0 }, max: { x: 902.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_902.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 902.0 }, max: { x: 1007.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1007.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1007.0 }, max: { x: 1124.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1124.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1124.0 }, max: { x: 1243.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1243.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1243.0 }, max: { x: 1357.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1357.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1357.0 }, max: { x: 1468.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1468.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1468.0 }, max: { x: 1583.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1583.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1583.0 }, max: { x: 1703.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1703.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1703.0 }, max: { x: 1820.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1820.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1820.0 }, max: { x: 1935.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] { _id: "test.foo-x_1935.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 1935.0 }, max: { x: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] ----
m30999| Thu Jun 14 01:40:08 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:40:08 [Balancer] donor : 12 chunks on shard0001
m30999| Thu Jun 14 01:40:08 [Balancer] receiver : 8 chunks on shard0000
m30999| Thu Jun 14 01:40:08 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_675.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: 675.0 }, max: { x: 787.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:40:08 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|17||000000000000000000000000 min: { x: 675.0 } max: { x: 787.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
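The round above follows the balancer's basic rule: tally chunks per shard for the collection, and if the most loaded shard (the donor, 12 chunks on shard0001) exceeds the least loaded one (the receiver, 8 on shard0000) by more than the allowed imbalance, move one of the donor's chunks, here its lowest range, to the receiver. A rough, simplified sketch of that selection in shell JavaScript; this is not the actual balancer code, which also honors maxSize, draining, jumbo chunks and tagging, and uses a threshold that depends on the total chunk count:

    function pickMigration(ns) {
        var config = db.getSiblingDB("config");
        var counts = {};
        config.chunks.find({ ns: ns }).forEach(function(c) {
            counts[c.shard] = (counts[c.shard] || 0) + 1;
        });
        var donor = null, receiver = null;
        for (var s in counts) {
            if (donor === null || counts[s] > counts[donor]) donor = s;
            if (receiver === null || counts[s] < counts[receiver]) receiver = s;
        }
        if (donor === null || counts[donor] - counts[receiver] < 2) return null;   // imbalance too small (threshold simplified)
        // candidate: the donor's lowest-range chunk
        var chunk = config.chunks.find({ ns: ns, shard: donor }).sort({ min: 1 }).next();
        return { from: donor, to: receiver, chunk: chunk };
    }
    printjson(pickMigration("test.foo"));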
m30001| Thu Jun 14 01:40:08 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 675.0 }, max: { x: 787.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_675.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:08 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:08 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' acquired, ts : 4fd97938bb51f6302c92cdf9
m30001| Thu Jun 14 01:40:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:08-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652408977), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 675.0 }, max: { x: 787.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:08 [conn4] moveChunk request accepted at version 9|1||4fd978ebf59a384fa81d5cff
m30001| Thu Jun 14 01:40:08 [conn4] moveChunk number of documents: 112
m30000| Thu Jun 14 01:40:08 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 675.0 } -> { x: 787.0 }
m30001| Thu Jun 14 01:40:09 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 675.0 }, max: { x: 787.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 112, clonedBytes: 1124816, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:09 [conn4] moveChunk setting version to: 10|0||4fd978ebf59a384fa81d5cff
m30000| Thu Jun 14 01:40:09 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 675.0 } -> { x: 787.0 }
m30000| Thu Jun 14 01:40:09 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:09-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652409984), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 675.0 }, max: { x: 787.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 19, step4 of 5: 0, step5 of 5: 986 } }
m30001| Thu Jun 14 01:40:09 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 675.0 }, max: { x: 787.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 112, clonedBytes: 1124816, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:09 [conn4] moveChunk updating self version to: 10|1||4fd978ebf59a384fa81d5cff through { x: 500.0 } -> { x: 501.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:40:09 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:09-53", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652409989), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 675.0 }, max: { x: 787.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:09 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:10 [conn4] moveChunk deleted: 112
m30001| Thu Jun 14 01:40:10 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652332:1880705399' unlocked.
m30001| Thu Jun 14 01:40:10 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:10-54", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44585", time: new Date(1339652410000), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 675.0 }, max: { x: 787.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 8, step6 of 6: 11 } }
m30001| Thu Jun 14 01:40:10 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 675.0 }, max: { x: 787.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_675.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:47804 w:72526 reslen:37 1024ms
m30999| Thu Jun 14 01:40:10 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:40:10 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 9|1||4fd978ebf59a384fa81d5cff and 21 chunks
m30999| Thu Jun 14 01:40:10 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 10|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:40:10 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 10|1||4fd978ebf59a384fa81d5cff based on: 9|1||4fd978ebf59a384fa81d5cff
m30999| Thu Jun 14 01:40:10 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:40:10 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652331:1804289383' unlocked.
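The migration the balancer just completed is an ordinary moveChunk command; the balancer simply issued the request that conn4 on shard0001 received above. From a shell connected to the mongos, a manual equivalent would look roughly like this, using any key inside the { x: 675 } -->> { x: 787 } range from the status output:

    // ask mongos to move the chunk containing x: 700 to shard0000
    db.getSiblingDB("admin").runCommand({
        moveChunk: "test.foo",
        find: { x: 700 },
        to: "shard0000"
    });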
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 9, "shard0001" : 12 }
diff: 3
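The two lines above are the test tallying chunks per shard after the migration and checking how far apart the shards are (9 vs 12, diff 3). A minimal sketch of producing those numbers from config.chunks, assuming a shell connected to the mongos:

    var config = db.getSiblingDB("config");
    var a = config.chunks.count({ ns: "test.foo", shard: "shard0000" });
    var b = config.chunks.count({ ns: "test.foo", shard: "shard0001" });
    printjson({ shard0000: a, shard0001: b });
    print("diff: " + Math.abs(a - b));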
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo", lastmod: new Date(1339652331), dropped: false, key: { x: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:40:12 [conn] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd978ebf59a384fa81d5cff'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0000 9
shard0001 12
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0000 Timestamp(3000, 0)
{ "x" : 0 } -->> { "x" : 53 } on : shard0000 Timestamp(2000, 0)
{ "x" : 53 } -->> { "x" : 173 } on : shard0000 Timestamp(4000, 0)
{ "x" : 173 } -->> { "x" : 292 } on : shard0000 Timestamp(5000, 0)
{ "x" : 292 } -->> { "x" : 401 } on : shard0000 Timestamp(6000, 0)
{ "x" : 401 } -->> { "x" : 500 } on : shard0000 Timestamp(7000, 0)
{ "x" : 500 } -->> { "x" : 501 } on : shard0001 Timestamp(10000, 1)
{ "x" : 501 } -->> { "x" : 563 } on : shard0000 Timestamp(8000, 0)
{ "x" : 563 } -->> { "x" : 675 } on : shard0000 Timestamp(9000, 0)
{ "x" : 675 } -->> { "x" : 787 } on : shard0000 Timestamp(10000, 0)
{ "x" : 787 } -->> { "x" : 902 } on : shard0001 Timestamp(1000, 19)
{ "x" : 902 } -->> { "x" : 1007 } on : shard0001 Timestamp(1000, 21)
{ "x" : 1007 } -->> { "x" : 1124 } on : shard0001 Timestamp(1000, 23)
{ "x" : 1124 } -->> { "x" : 1243 } on : shard0001 Timestamp(1000, 25)
{ "x" : 1243 } -->> { "x" : 1357 } on : shard0001 Timestamp(1000, 27)
{ "x" : 1357 } -->> { "x" : 1468 } on : shard0001 Timestamp(1000, 29)
{ "x" : 1468 } -->> { "x" : 1583 } on : shard0001 Timestamp(1000, 31)
{ "x" : 1583 } -->> { "x" : 1703 } on : shard0001 Timestamp(1000, 33)
{ "x" : 1703 } -->> { "x" : 1820 } on : shard0001 Timestamp(1000, 35)
{ "x" : 1820 } -->> { "x" : 1935 } on : shard0001 Timestamp(1000, 37)
{ "x" : 1935 } -->> { "x" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 38)
m30999| Thu Jun 14 01:40:12 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:40:12 [conn3] end connection 127.0.0.1:56670 (10 connections now open)
m30000| Thu Jun 14 01:40:12 [conn5] end connection 127.0.0.1:56675 (9 connections now open)
m30000| Thu Jun 14 01:40:12 [conn6] end connection 127.0.0.1:56676 (8 connections now open)
m30001| Thu Jun 14 01:40:12 [conn3] end connection 127.0.0.1:44583 (4 connections now open)
m30001| Thu Jun 14 01:40:12 [conn4] end connection 127.0.0.1:44585 (3 connections now open)
m30000| Thu Jun 14 01:40:12 [conn7] end connection 127.0.0.1:56679 (7 connections now open)
m30000| Thu Jun 14 01:40:12 [conn11] end connection 127.0.0.1:56686 (7 connections now open)
Thu Jun 14 01:40:13 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:40:13 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:40:13 [interruptThread] now exiting
m30000| Thu Jun 14 01:40:13 dbexit:
m30000| Thu Jun 14 01:40:13 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:40:13 [interruptThread] closing listening socket: 34
m30000| Thu Jun 14 01:40:13 [interruptThread] closing listening socket: 35
m30000| Thu Jun 14 01:40:13 [interruptThread] closing listening socket: 36
m30000| Thu Jun 14 01:40:13 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:40:13 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:40:13 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:40:13 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:40:13 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:40:13 [conn10] end connection 127.0.0.1:56685 (5 connections now open)
m30001| Thu Jun 14 01:40:13 [conn5] end connection 127.0.0.1:44587 (2 connections now open)
m30000| Thu Jun 14 01:40:13 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:40:13 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:40:13 dbexit: really exiting now
Thu Jun 14 01:40:14 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:40:14 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:40:14 [interruptThread] now exiting
m30001| Thu Jun 14 01:40:14 dbexit:
m30001| Thu Jun 14 01:40:14 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:40:14 [interruptThread] closing listening socket: 37
m30001| Thu Jun 14 01:40:14 [interruptThread] closing listening socket: 38
m30001| Thu Jun 14 01:40:14 [interruptThread] closing listening socket: 39
m30001| Thu Jun 14 01:40:14 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:40:14 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:40:14 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:40:14 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:40:14 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:40:14 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:40:14 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:40:14 dbexit: really exiting now
Thu Jun 14 01:40:15 shell: stopped mongo program on port 30001
*** ShardingTest jump1 completed successfully in 84.462 seconds ***
84546.314955ms
Thu Jun 14 01:40:15 [initandlisten] connection accepted from 127.0.0.1:35155 #39 (26 connections now open)
*******************************************
Test : key_many.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/key_many.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/key_many.js";TestData.testFile = "key_many.js";TestData.testName = "key_many";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:40:15 2012
MongoDB shell version: 2.1.2-pre-
null
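The Command line above shows how smoke.py launches each jstest: a --nodb mongo shell run of the .js file, with an --eval that seeds a global TestData object describing the run (test path, journaling, auth, key file). Inside a test that object is just a plain JavaScript global; a hedged sketch of how a test might consult it:

    // sketch: jstests can branch on the harness-provided globals
    if (typeof TestData !== "undefined" && TestData.noJournal) {
        print("running " + TestData.testName + " without journaling");
    }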
Resetting db path '/data/db/key_many0'
Thu Jun 14 01:40:15 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/key_many0
m30000| Thu Jun 14 01:40:15
m30000| Thu Jun 14 01:40:15 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:40:15
m30000| Thu Jun 14 01:40:15 [initandlisten] MongoDB starting : pid=26316 port=30000 dbpath=/data/db/key_many0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:40:15 [initandlisten]
m30000| Thu Jun 14 01:40:15 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:40:15 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:40:15 [initandlisten]
m30000| Thu Jun 14 01:40:15 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:40:15 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:40:15 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:40:15 [initandlisten]
m30000| Thu Jun 14 01:40:15 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:40:15 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:40:15 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:40:15 [initandlisten] options: { dbpath: "/data/db/key_many0", port: 30000 }
m30000| Thu Jun 14 01:40:15 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:40:15 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/key_many1'
m30000| Thu Jun 14 01:40:15 [initandlisten] connection accepted from 127.0.0.1:56689 #1 (1 connection now open)
Thu Jun 14 01:40:15 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/key_many1
m30001| Thu Jun 14 01:40:15
m30001| Thu Jun 14 01:40:15 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:40:15
m30001| Thu Jun 14 01:40:15 [initandlisten] MongoDB starting : pid=26329 port=30001 dbpath=/data/db/key_many1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:40:15 [initandlisten]
m30001| Thu Jun 14 01:40:15 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:40:15 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:40:15 [initandlisten]
m30001| Thu Jun 14 01:40:15 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:40:15 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:40:15 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:40:15 [initandlisten]
m30001| Thu Jun 14 01:40:15 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:40:15 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:40:15 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:40:15 [initandlisten] options: { dbpath: "/data/db/key_many1", port: 30001 }
m30001| Thu Jun 14 01:40:15 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:40:15 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30000| Thu Jun 14 01:40:15 [initandlisten] connection accepted from 127.0.0.1:56692 #2 (2 connections now open)
m30001| Thu Jun 14 01:40:15 [initandlisten] connection accepted from 127.0.0.1:44594 #1 (1 connection now open)
ShardingTest key_many :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
Thu Jun 14 01:40:15 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
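The block above is the standard ShardingTest harness boot: reset one dbpath per shard, start a mongod per shard, then start a mongos pointed at the config server (localhost:30000 doubles as config server and first shard in these tests). In the jstest itself this whole sequence comes from a single constructor call, roughly as below; the exact arguments vary by test, this is only a sketch of the old positional form:

    var s = new ShardingTest("key_many", 2);     // test name, number of shards
    s.adminCommand({ enablesharding: "test" });  // routed through the mongos the harness started
    // ... per-key-type sharding and assertions follow ...
    s.stop();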
m30000| Thu Jun 14 01:40:15 [FileAllocator] allocating new datafile /data/db/key_many0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:40:15 [FileAllocator] creating directory /data/db/key_many0/_tmp
m30999| Thu Jun 14 01:40:15 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:40:15 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26344 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:40:15 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:40:15 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:40:15 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:40:15 [initandlisten] connection accepted from 127.0.0.1:56694 #3 (3 connections now open)
m30000| Thu Jun 14 01:40:15 [FileAllocator] done allocating datafile /data/db/key_many0/config.ns, size: 16MB, took 0.318 secs
m30000| Thu Jun 14 01:40:15 [FileAllocator] allocating new datafile /data/db/key_many0/config.0, filling with zeroes...
m30999| Thu Jun 14 01:40:16 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:40:16 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:40:16 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:40:16 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:40:16 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:40:16
m30999| Thu Jun 14 01:40:16 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:40:16 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652416:1804289383' acquired, ts : 4fd979408a26dcf9048e3fbc
m30999| Thu Jun 14 01:40:16 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652416:1804289383' unlocked.
m30999| Thu Jun 14 01:40:16 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652416:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:40:16 [FileAllocator] done allocating datafile /data/db/key_many0/config.0, size: 16MB, took 0.321 secs
m30000| Thu Jun 14 01:40:16 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn2] insert config.settings keyUpdates:0 locks(micros) w:659243 659ms
m30000| Thu Jun 14 01:40:16 [FileAllocator] allocating new datafile /data/db/key_many0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:40:16 [initandlisten] connection accepted from 127.0.0.1:56698 #4 (4 connections now open)
m30000| Thu Jun 14 01:40:16 [initandlisten] connection accepted from 127.0.0.1:56699 #5 (5 connections now open)
m30000| Thu Jun 14 01:40:16 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:40:16 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:40:16 [conn4] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [initandlisten] connection accepted from 127.0.0.1:56700 #6 (6 connections now open)
m30000| Thu Jun 14 01:40:16 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 1 total records. 0 secs
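The index builds above are the mongos bootstrapping its metadata collections on the config server: settings, version, chunks (with ns/min, ns/shard/min and ns/lastmod indexes), shards, mongos, locks and lockpings. A quick way to look at the result from a shell connected to the config server, shown as an illustrative query:

    var config = db.getSiblingDB("config");
    var names = config.getCollectionNames();         // settings, version, chunks, shards, mongos, locks, lockpings, ...
    for (var i = 0; i < names.length; i++) print(names[i]);
    printjson(config.chunks.getIndexes());            // the { ns: 1, min: 1 } style indexes built above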
m30999| Thu Jun 14 01:40:16 [mongosMain] connection accepted from 127.0.0.1:53582 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:40:16 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:40:16 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:40:16 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:16 [FileAllocator] done allocating datafile /data/db/key_many0/config.1, size: 32MB, took 0.568 secs
m30000| Thu Jun 14 01:40:16 [conn5] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:5 r:257 w:1612 reslen:177 416ms
m30999| Thu Jun 14 01:40:16 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:40:16 [initandlisten] connection accepted from 127.0.0.1:44605 #2 (2 connections now open)
m30999| Thu Jun 14 01:40:16 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30001| Thu Jun 14 01:40:16 [initandlisten] connection accepted from 127.0.0.1:44607 #3 (3 connections now open)
m30001| Thu Jun 14 01:40:16 [initandlisten] connection accepted from 127.0.0.1:44608 #4 (4 connections now open)
m30000| Thu Jun 14 01:40:16 [initandlisten] connection accepted from 127.0.0.1:56703 #7 (7 connections now open)
m30999| Thu Jun 14 01:40:16 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd979408a26dcf9048e3fbb
m30999| Thu Jun 14 01:40:16 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd979408a26dcf9048e3fbb
m30999| Thu Jun 14 01:40:16 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:40:16 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:40:16 [conn] enabling sharding on: test
#### Now Testing string ####
m30999| Thu Jun 14 01:40:16 [conn] CMD: shardcollection: { shardcollection: "test.foo_string", key: { k: 1.0 } }
m30999| Thu Jun 14 01:40:16 [conn] enable sharding on: test.foo_string with shard key: { k: 1.0 }
m30999| Thu Jun 14 01:40:16 [conn] going to create 1 chunk(s) for: test.foo_string using new epoch 4fd979408a26dcf9048e3fbd
m30999| Thu Jun 14 01:40:16 [conn] ChunkManager: time to load chunks for test.foo_string: 0ms sequenceNumber: 2 version: 1|0||4fd979408a26dcf9048e3fbd based on: (empty)
m30999| Thu Jun 14 01:40:16 [conn] resetting shard version of test.foo_string on localhost:30000, version is zero
m30000| Thu Jun 14 01:40:16 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:40:16 [conn4] build index done. scanned 0 total records. 0 secs
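"#### Now Testing string ####" marks the first shard-key type the test exercises. Sharding a collection is two admin commands through the mongos: enable sharding on the database (already done above for "test"), then shardcollection with the key pattern, matching the CMD: shardcollection line above:

    var admin = db.getSiblingDB("admin");
    admin.runCommand({ enablesharding: "test" });                               // once per database
    admin.runCommand({ shardcollection: "test.foo_string", key: { k: 1 } });    // per collection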
m30001| Thu Jun 14 01:40:16 [FileAllocator] allocating new datafile /data/db/key_many1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:40:16 [FileAllocator] creating directory /data/db/key_many1/_tmp
m30001| Thu Jun 14 01:40:17 [FileAllocator] done allocating datafile /data/db/key_many1/test.ns, size: 16MB, took 0.472 secs
m30001| Thu Jun 14 01:40:17 [FileAllocator] allocating new datafile /data/db/key_many1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:40:17 [FileAllocator] done allocating datafile /data/db/key_many1/test.0, size: 16MB, took 0.342 secs
m30001| Thu Jun 14 01:40:17 [FileAllocator] allocating new datafile /data/db/key_many1/test.1, filling with zeroes...
m30001| Thu Jun 14 01:40:17 [conn4] build index test.foo_string { _id: 1 }
m30001| Thu Jun 14 01:40:17 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:17 [conn4] info: creating collection test.foo_string on add index
m30001| Thu Jun 14 01:40:17 [conn4] build index test.foo_string { k: 1.0 }
m30001| Thu Jun 14 01:40:17 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:17 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) W:68 r:240 w:828280 828ms
m30001| Thu Jun 14 01:40:17 [conn3] command admin.$cmd command: { setShardVersion: "test.foo_string", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979408a26dcf9048e3fbd'), serverID: ObjectId('4fd979408a26dcf9048e3fbb'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:60 reslen:187 828ms
m30001| Thu Jun 14 01:40:17 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:40:17 [initandlisten] connection accepted from 127.0.0.1:56706 #8 (8 connections now open)
m30999| Thu Jun 14 01:40:17 [conn] splitting: test.foo_string shard: ns:test.foo_string at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { k: MinKey } max: { k: MaxKey }
m30999| Thu Jun 14 01:40:17 [conn] ChunkManager: time to load chunks for test.foo_string: 0ms sequenceNumber: 3 version: 1|2||4fd979408a26dcf9048e3fbd based on: 1|0||4fd979408a26dcf9048e3fbd
m30001| Thu Jun 14 01:40:17 [conn4] request split points lookup for chunk test.foo_string { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:17 [conn4] received splitChunk request: { splitChunk: "test.foo_string", keyPattern: { k: 1.0 }, min: { k: MinKey }, max: { k: MaxKey }, from: "shard0001", splitKeys: [ { k: "allan" } ], shardId: "test.foo_string-k_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:17 [conn4] created new distributed lock for test.foo_string on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:17 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652417:126826176 (sleeping for 30000ms)
m30001| Thu Jun 14 01:40:17 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979417271f2fe6d09db19
m30001| Thu Jun 14 01:40:17 [conn4] splitChunk accepted at version 1|0||4fd979408a26dcf9048e3fbd
m30001| Thu Jun 14 01:40:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:17-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652417696), what: "split", ns: "test.foo_string", details: { before: { min: { k: MinKey }, max: { k: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { k: MinKey }, max: { k: "allan" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979408a26dcf9048e3fbd') }, right: { min: { k: "allan" }, max: { k: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979408a26dcf9048e3fbd') } } }
m30001| Thu Jun 14 01:40:17 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30000| Thu Jun 14 01:40:17 [initandlisten] connection accepted from 127.0.0.1:56707 #9 (9 connections now open)
m30999| Thu Jun 14 01:40:17 [conn] splitting: test.foo_string shard: ns:test.foo_string at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { k: "allan" } max: { k: MaxKey }
m30999| Thu Jun 14 01:40:17 [conn] ChunkManager: time to load chunks for test.foo_string: 0ms sequenceNumber: 4 version: 1|4||4fd979408a26dcf9048e3fbd based on: 1|2||4fd979408a26dcf9048e3fbd
m30001| Thu Jun 14 01:40:17 [conn4] request split points lookup for chunk test.foo_string { : "allan" } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:17 [conn4] received splitChunk request: { splitChunk: "test.foo_string", keyPattern: { k: 1.0 }, min: { k: "allan" }, max: { k: MaxKey }, from: "shard0001", splitKeys: [ { k: "sara" } ], shardId: "test.foo_string-k_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:17 [conn4] created new distributed lock for test.foo_string on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:17 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979417271f2fe6d09db1a
m30001| Thu Jun 14 01:40:17 [conn4] splitChunk accepted at version 1|2||4fd979408a26dcf9048e3fbd
m30001| Thu Jun 14 01:40:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:17-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652417700), what: "split", ns: "test.foo_string", details: { before: { min: { k: "allan" }, max: { k: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { k: "allan" }, max: { k: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979408a26dcf9048e3fbd') }, right: { min: { k: "sara" }, max: { k: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979408a26dcf9048e3fbd') } } }
m30001| Thu Jun 14 01:40:17 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:17 [conn] splitting: test.foo_string shard: ns:test.foo_string at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { k: "allan" } max: { k: "sara" }
m30999| Thu Jun 14 01:40:17 [conn] ChunkManager: time to load chunks for test.foo_string: 0ms sequenceNumber: 5 version: 1|6||4fd979408a26dcf9048e3fbd based on: 1|4||4fd979408a26dcf9048e3fbd
m30001| Thu Jun 14 01:40:17 [conn4] request split points lookup for chunk test.foo_string { : "allan" } -->> { : "sara" }
m30001| Thu Jun 14 01:40:17 [conn4] received splitChunk request: { splitChunk: "test.foo_string", keyPattern: { k: 1.0 }, min: { k: "allan" }, max: { k: "sara" }, from: "shard0001", splitKeys: [ { k: "joe" } ], shardId: "test.foo_string-k_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:17 [conn4] created new distributed lock for test.foo_string on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:17 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979417271f2fe6d09db1b
m30001| Thu Jun 14 01:40:17 [conn4] splitChunk accepted at version 1|4||4fd979408a26dcf9048e3fbd
m30001| Thu Jun 14 01:40:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:17-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652417706), what: "split", ns: "test.foo_string", details: { before: { min: { k: "allan" }, max: { k: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { k: "allan" }, max: { k: "joe" }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979408a26dcf9048e3fbd') }, right: { min: { k: "joe" }, max: { k: "sara" }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979408a26dcf9048e3fbd') } } }
m30001| Thu Jun 14 01:40:17 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
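The three splitChunk rounds above come from the test pre-splitting the collection at chosen key values ("allan", "sara", "joe") so it can then place specific ranges on specific shards. From the shell, a pre-split at an exact key is a split command with "middle", for example:

    var admin = db.getSiblingDB("admin");
    admin.runCommand({ split: "test.foo_string", middle: { k: "allan" } });
    admin.runCommand({ split: "test.foo_string", middle: { k: "sara" } });
    admin.runCommand({ split: "test.foo_string", middle: { k: "joe" } });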
m30999| Thu Jun 14 01:40:17 [conn] CMD: movechunk: { movechunk: "test.foo_string", find: { k: "allan" }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:17 [conn] moving chunk ns: test.foo_string moving ( ns:test.foo_string at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { k: "allan" } max: { k: "joe" }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:17 [conn4] received moveChunk request: { moveChunk: "test.foo_string", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { k: "allan" }, max: { k: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_string-k_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:17 [conn4] created new distributed lock for test.foo_string on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:17 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979417271f2fe6d09db1c
m30001| Thu Jun 14 01:40:17 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:17-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652417709), what: "moveChunk.start", ns: "test.foo_string", details: { min: { k: "allan" }, max: { k: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:17 [conn4] moveChunk request accepted at version 1|6||4fd979408a26dcf9048e3fbd
m30001| Thu Jun 14 01:40:17 [conn4] moveChunk number of documents: 3
m30001| Thu Jun 14 01:40:17 [initandlisten] connection accepted from 127.0.0.1:44611 #5 (5 connections now open)
m30000| Thu Jun 14 01:40:17 [FileAllocator] allocating new datafile /data/db/key_many0/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:40:18 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_string", from: "localhost:30001", min: { k: "allan" }, max: { k: "joe" }, shardKeyPattern: { k: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:18 [FileAllocator] done allocating datafile /data/db/key_many1/test.1, size: 32MB, took 1.211 secs
m30000| Thu Jun 14 01:40:18 [FileAllocator] done allocating datafile /data/db/key_many0/test.ns, size: 16MB, took 1.185 secs
m30000| Thu Jun 14 01:40:18 [FileAllocator] allocating new datafile /data/db/key_many0/test.0, filling with zeroes...
m30000| Thu Jun 14 01:40:19 [FileAllocator] done allocating datafile /data/db/key_many0/test.0, size: 16MB, took 0.303 secs
m30000| Thu Jun 14 01:40:19 [FileAllocator] allocating new datafile /data/db/key_many0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:40:19 [migrateThread] build index test.foo_string { _id: 1 }
m30000| Thu Jun 14 01:40:19 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:19 [migrateThread] info: creating collection test.foo_string on add index
m30000| Thu Jun 14 01:40:19 [migrateThread] build index test.foo_string { k: 1.0 }
m30000| Thu Jun 14 01:40:19 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_string' { k: "allan" } -> { k: "joe" }
m30001| Thu Jun 14 01:40:19 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_string", from: "localhost:30001", min: { k: "allan" }, max: { k: "joe" }, shardKeyPattern: { k: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 103, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:19 [conn4] moveChunk setting version to: 2|0||4fd979408a26dcf9048e3fbd
m30000| Thu Jun 14 01:40:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_string' { k: "allan" } -> { k: "joe" }
m30000| Thu Jun 14 01:40:19 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:19-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652419728), what: "moveChunk.to", ns: "test.foo_string", details: { min: { k: "allan" }, max: { k: "joe" }, step1 of 5: 1499, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 518 } }
m30000| Thu Jun 14 01:40:19 [initandlisten] connection accepted from 127.0.0.1:56709 #10 (10 connections now open)
m30001| Thu Jun 14 01:40:19 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_string", from: "localhost:30001", min: { k: "allan" }, max: { k: "joe" }, shardKeyPattern: { k: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 103, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:19 [conn4] moveChunk updating self version to: 2|1||4fd979408a26dcf9048e3fbd through { k: MinKey } -> { k: "allan" } for collection 'test.foo_string'
m30001| Thu Jun 14 01:40:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:19-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652419733), what: "moveChunk.commit", ns: "test.foo_string", details: { min: { k: "allan" }, max: { k: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:19 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:19 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_string/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:19-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652419734), what: "moveChunk.from", ns: "test.foo_string", details: { min: { k: "allan" }, max: { k: "joe" }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2007, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:19 [conn4] command admin.$cmd command: { moveChunk: "test.foo_string", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { k: "allan" }, max: { k: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_string-k_"allan"", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:557 w:828614 reslen:37 2026ms
m30999| Thu Jun 14 01:40:19 [conn] ChunkManager: time to load chunks for test.foo_string: 0ms sequenceNumber: 6 version: 2|1||4fd979408a26dcf9048e3fbd based on: 1|6||4fd979408a26dcf9048e3fbd
ShardingTest
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
m30000| Thu Jun 14 01:40:19 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:19 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:19 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:19 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:19 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_string",
    "count" : 6,
    "numExtents" : 2,
    "size" : 216,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "k_1" : 16352
    },
    "avgObjSize" : 36,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_string",
            "count" : 3,
            "size" : 108,
            "avgObjSize" : 36,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "k_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_string",
            "count" : 3,
            "size" : 108,
            "avgObjSize" : 36,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "k_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
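The "string" pass above reduces to a handful of admin commands routed through mongos: shard test.foo_string on { k: 1 }, split the initial chunk at "allan", "sara", and "joe", move the [ "allan", "joe" ) chunk to shard0000, and compare per-shard collstats. A rough mongo-shell sketch of that sequence (an illustrative reconstruction from the commands visible in the log, not the test's actual source; it assumes a shell connected to the mongos and that sharding is already enabled on the test database) would look like:

// Sketch only: reconstructs the admin commands seen in the log above.
var admin = db.getSiblingDB("admin");
var testdb = db.getSiblingDB("test");

// Shard the collection on { k: 1 } (enablesharding on "test" is assumed to have run earlier).
admin.runCommand({ shardcollection: "test.foo_string", key: { k: 1 } });

// Split the single initial chunk at the boundaries logged above.
admin.runCommand({ split: "test.foo_string", middle: { k: "allan" } });
admin.runCommand({ split: "test.foo_string", middle: { k: "sara" } });
admin.runCommand({ split: "test.foo_string", middle: { k: "joe" } });

// Move the [ "allan", "joe" ) chunk from shard0001 to shard0000.
admin.runCommand({ movechunk: "test.foo_string", find: { k: "allan" }, to: "localhost:30000" });

// The test also inserts six documents per key type (values omitted here) before taking stats.
printjson(testdb.foo_string.stats());   // produces the sharded collstats block shown above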
#### Now Testing double ####
m30999| Thu Jun 14 01:40:19 [conn] CMD: shardcollection: { shardcollection: "test.foo_double", key: { a: 1.0 } }
m30999| Thu Jun 14 01:40:19 [conn] enable sharding on: test.foo_double with shard key: { a: 1.0 }
m30999| Thu Jun 14 01:40:19 [conn] going to create 1 chunk(s) for: test.foo_double using new epoch 4fd979438a26dcf9048e3fbe
m30999| Thu Jun 14 01:40:19 [conn] ChunkManager: time to load chunks for test.foo_double: 0ms sequenceNumber: 7 version: 1|0||4fd979438a26dcf9048e3fbe based on: (empty)
m30999| Thu Jun 14 01:40:19 [conn] resetting shard version of test.foo_double on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:19 [conn4] build index test.foo_double { _id: 1 }
m30001| Thu Jun 14 01:40:19 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:19 [conn4] info: creating collection test.foo_double on add index
m30001| Thu Jun 14 01:40:19 [conn4] build index test.foo_double { a: 1.0 }
m30001| Thu Jun 14 01:40:19 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:19 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:19 [conn] splitting: test.foo_double shard: ns:test.foo_double at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey }
m30001| Thu Jun 14 01:40:19 [conn4] request split points lookup for chunk test.foo_double { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:19 [conn4] received splitChunk request: { splitChunk: "test.foo_double", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: 1.2 } ], shardId: "test.foo_double-a_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:19 [conn4] created new distributed lock for test.foo_double on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979437271f2fe6d09db1d
m30001| Thu Jun 14 01:40:19 [conn4] splitChunk accepted at version 1|0||4fd979438a26dcf9048e3fbe
m30001| Thu Jun 14 01:40:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:19-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652419765), what: "split", ns: "test.foo_double", details: { before: { min: { a: MinKey }, max: { a: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 1.2 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979438a26dcf9048e3fbe') }, right: { min: { a: 1.2 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979438a26dcf9048e3fbe') } } }
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:19 [conn] ChunkManager: time to load chunks for test.foo_double: 0ms sequenceNumber: 8 version: 1|2||4fd979438a26dcf9048e3fbe based on: 1|0||4fd979438a26dcf9048e3fbe
m30999| Thu Jun 14 01:40:19 [conn] splitting: test.foo_double shard: ns:test.foo_double at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 1.2 } max: { a: MaxKey }
m30001| Thu Jun 14 01:40:19 [conn4] request split points lookup for chunk test.foo_double { : 1.2 } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:19 [conn4] received splitChunk request: { splitChunk: "test.foo_double", keyPattern: { a: 1.0 }, min: { a: 1.2 }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: 9.9 } ], shardId: "test.foo_double-a_1.2", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:19 [conn4] created new distributed lock for test.foo_double on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979437271f2fe6d09db1e
m30001| Thu Jun 14 01:40:19 [conn4] splitChunk accepted at version 1|2||4fd979438a26dcf9048e3fbe
m30001| Thu Jun 14 01:40:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:19-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652419769), what: "split", ns: "test.foo_double", details: { before: { min: { a: 1.2 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 1.2 }, max: { a: 9.9 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979438a26dcf9048e3fbe') }, right: { min: { a: 9.9 }, max: { a: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979438a26dcf9048e3fbe') } } }
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:19 [conn] ChunkManager: time to load chunks for test.foo_double: 0ms sequenceNumber: 9 version: 1|4||4fd979438a26dcf9048e3fbe based on: 1|2||4fd979438a26dcf9048e3fbe
m30999| Thu Jun 14 01:40:19 [conn] splitting: test.foo_double shard: ns:test.foo_double at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 1.2 } max: { a: 9.9 }
m30001| Thu Jun 14 01:40:19 [conn4] request split points lookup for chunk test.foo_double { : 1.2 } -->> { : 9.9 }
m30001| Thu Jun 14 01:40:19 [conn4] received splitChunk request: { splitChunk: "test.foo_double", keyPattern: { a: 1.0 }, min: { a: 1.2 }, max: { a: 9.9 }, from: "shard0001", splitKeys: [ { a: 4.6 } ], shardId: "test.foo_double-a_1.2", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:19 [conn4] created new distributed lock for test.foo_double on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979437271f2fe6d09db1f
m30001| Thu Jun 14 01:40:19 [conn4] splitChunk accepted at version 1|4||4fd979438a26dcf9048e3fbe
m30001| Thu Jun 14 01:40:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:19-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652419773), what: "split", ns: "test.foo_double", details: { before: { min: { a: 1.2 }, max: { a: 9.9 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 1.2 }, max: { a: 4.6 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979438a26dcf9048e3fbe') }, right: { min: { a: 4.6 }, max: { a: 9.9 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979438a26dcf9048e3fbe') } } }
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:19 [conn] ChunkManager: time to load chunks for test.foo_double: 0ms sequenceNumber: 10 version: 1|6||4fd979438a26dcf9048e3fbe based on: 1|4||4fd979438a26dcf9048e3fbe
m30999| Thu Jun 14 01:40:19 [conn] CMD: movechunk: { movechunk: "test.foo_double", find: { a: 1.2 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:19 [conn] moving chunk ns: test.foo_double moving ( ns:test.foo_double at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 1.2 } max: { a: 4.6 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:19 [conn4] received moveChunk request: { moveChunk: "test.foo_double", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 1.2 }, max: { a: 4.6 }, maxChunkSizeBytes: 52428800, shardId: "test.foo_double-a_1.2", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:19 [conn4] created new distributed lock for test.foo_double on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:19 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979437271f2fe6d09db20
m30001| Thu Jun 14 01:40:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:19-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652419776), what: "moveChunk.start", ns: "test.foo_double", details: { min: { a: 1.2 }, max: { a: 4.6 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:19 [conn4] moveChunk request accepted at version 1|6||4fd979438a26dcf9048e3fbe
m30001| Thu Jun 14 01:40:19 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:19 [migrateThread] build index test.foo_double { _id: 1 }
m30000| Thu Jun 14 01:40:19 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:19 [migrateThread] info: creating collection test.foo_double on add index
m30000| Thu Jun 14 01:40:19 [migrateThread] build index test.foo_double { a: 1.0 }
m30000| Thu Jun 14 01:40:19 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_double' { a: 1.2 } -> { a: 4.6 }
m30000| Thu Jun 14 01:40:19 [FileAllocator] done allocating datafile /data/db/key_many0/test.1, size: 32MB, took 0.683 secs
m30001| Thu Jun 14 01:40:20 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_double", from: "localhost:30001", min: { a: 1.2 }, max: { a: 4.6 }, shardKeyPattern: { a: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 99, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:20 [conn4] moveChunk setting version to: 2|0||4fd979438a26dcf9048e3fbe
m30000| Thu Jun 14 01:40:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_double' { a: 1.2 } -> { a: 4.6 }
m30000| Thu Jun 14 01:40:20 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:20-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652420793), what: "moveChunk.to", ns: "test.foo_double", details: { min: { a: 1.2 }, max: { a: 4.6 }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1013 } }
m30001| Thu Jun 14 01:40:20 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_double", from: "localhost:30001", min: { a: 1.2 }, max: { a: 4.6 }, shardKeyPattern: { a: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 99, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:20 [conn4] moveChunk updating self version to: 2|1||4fd979438a26dcf9048e3fbe through { a: MinKey } -> { a: 1.2 } for collection 'test.foo_double'
m30001| Thu Jun 14 01:40:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:20-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652420797), what: "moveChunk.commit", ns: "test.foo_double", details: { min: { a: 1.2 }, max: { a: 4.6 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:20 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:20 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_double/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:20-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652420798), what: "moveChunk.from", ns: "test.foo_double", details: { min: { a: 1.2 }, max: { a: 4.6 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1004, step5 of 6: 16, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:20 [conn4] command admin.$cmd command: { moveChunk: "test.foo_double", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 1.2 }, max: { a: 4.6 }, maxChunkSizeBytes: 52428800, shardId: "test.foo_double-a_1.2", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:1064 w:829727 reslen:37 1022ms
m30999| Thu Jun 14 01:40:20 [conn] ChunkManager: time to load chunks for test.foo_double: 0ms sequenceNumber: 11 version: 2|1||4fd979438a26dcf9048e3fbe based on: 1|6||4fd979438a26dcf9048e3fbe
ShardingTest
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
m30000| Thu Jun 14 01:40:20 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:20 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:20 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:20 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:20 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_double",
    "count" : 6,
    "numExtents" : 2,
    "size" : 216,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "a_1" : 16352
    },
    "avgObjSize" : 36,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_double",
            "count" : 3,
            "size" : 108,
            "avgObjSize" : 36,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_double",
            "count" : 3,
            "size" : 108,
            "avgObjSize" : 36,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
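The ShardingTest chunk listings interleaved above (chunk id, major|minor version, min -> max range, owning shard, namespace) are a pretty-printed view of the config.chunks collection. With the config schema of this era, where chunk documents carry an ns field, an equivalent query through mongos would be roughly:

// Sketch: list the chunks for test.foo_double in shard-key order, like the dump above.
var config = db.getSiblingDB("config");
config.chunks.find({ ns: "test.foo_double" }).sort({ min: 1 }).forEach(function (c) {
    print(c._id + "  " + c.lastmod + "  " + tojson(c.min) + " -> " + tojson(c.max) + "  " + c.shard);
});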
#### Now Testing date ####
m30999| Thu Jun 14 01:40:20 [conn] CMD: shardcollection: { shardcollection: "test.foo_date", key: { a: 1.0 } }
m30999| Thu Jun 14 01:40:20 [conn] enable sharding on: test.foo_date with shard key: { a: 1.0 }
m30999| Thu Jun 14 01:40:20 [conn] going to create 1 chunk(s) for: test.foo_date using new epoch 4fd979448a26dcf9048e3fbf
m30999| Thu Jun 14 01:40:20 [conn] ChunkManager: time to load chunks for test.foo_date: 0ms sequenceNumber: 12 version: 1|0||4fd979448a26dcf9048e3fbf based on: (empty)
m30999| Thu Jun 14 01:40:20 [conn] resetting shard version of test.foo_date on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:20 [conn4] build index test.foo_date { _id: 1 }
m30001| Thu Jun 14 01:40:20 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:20 [conn4] info: creating collection test.foo_date on add index
m30001| Thu Jun 14 01:40:20 [conn4] build index test.foo_date { a: 1.0 }
m30001| Thu Jun 14 01:40:20 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:20 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:20 [conn] splitting: test.foo_date shard: ns:test.foo_date at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey }
m30999| Thu Jun 14 01:40:20 [conn] ChunkManager: time to load chunks for test.foo_date: 0ms sequenceNumber: 13 version: 1|2||4fd979448a26dcf9048e3fbf based on: 1|0||4fd979448a26dcf9048e3fbf
m30001| Thu Jun 14 01:40:20 [conn4] request split points lookup for chunk test.foo_date { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:20 [conn4] received splitChunk request: { splitChunk: "test.foo_date", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: new Date(1000000) } ], shardId: "test.foo_date-a_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:20 [conn4] created new distributed lock for test.foo_date on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979447271f2fe6d09db21
m30001| Thu Jun 14 01:40:20 [conn4] splitChunk accepted at version 1|0||4fd979448a26dcf9048e3fbf
m30001| Thu Jun 14 01:40:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:20-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652420824), what: "split", ns: "test.foo_date", details: { before: { min: { a: MinKey }, max: { a: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: new Date(1000000) }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979448a26dcf9048e3fbf') }, right: { min: { a: new Date(1000000) }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979448a26dcf9048e3fbf') } } }
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:20 [conn] splitting: test.foo_date shard: ns:test.foo_date at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: new Date(1000000) } max: { a: MaxKey }
m30999| Thu Jun 14 01:40:20 [conn] ChunkManager: time to load chunks for test.foo_date: 0ms sequenceNumber: 14 version: 1|4||4fd979448a26dcf9048e3fbf based on: 1|2||4fd979448a26dcf9048e3fbf
m30001| Thu Jun 14 01:40:20 [conn4] request split points lookup for chunk test.foo_date { : new Date(1000000) } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:20 [conn4] received splitChunk request: { splitChunk: "test.foo_date", keyPattern: { a: 1.0 }, min: { a: new Date(1000000) }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: new Date(6000000) } ], shardId: "test.foo_date-a_new Date(1000000)", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:20 [conn4] created new distributed lock for test.foo_date on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979447271f2fe6d09db22
m30001| Thu Jun 14 01:40:20 [conn4] splitChunk accepted at version 1|2||4fd979448a26dcf9048e3fbf
m30001| Thu Jun 14 01:40:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:20-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652420828), what: "split", ns: "test.foo_date", details: { before: { min: { a: new Date(1000000) }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: new Date(1000000) }, max: { a: new Date(6000000) }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979448a26dcf9048e3fbf') }, right: { min: { a: new Date(6000000) }, max: { a: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979448a26dcf9048e3fbf') } } }
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:20 [conn] splitting: test.foo_date shard: ns:test.foo_date at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: new Date(1000000) } max: { a: new Date(6000000) }
m30999| Thu Jun 14 01:40:20 [conn] ChunkManager: time to load chunks for test.foo_date: 0ms sequenceNumber: 15 version: 1|6||4fd979448a26dcf9048e3fbf based on: 1|4||4fd979448a26dcf9048e3fbf
m30001| Thu Jun 14 01:40:20 [conn4] request split points lookup for chunk test.foo_date { : new Date(1000000) } -->> { : new Date(6000000) }
m30001| Thu Jun 14 01:40:20 [conn4] received splitChunk request: { splitChunk: "test.foo_date", keyPattern: { a: 1.0 }, min: { a: new Date(1000000) }, max: { a: new Date(6000000) }, from: "shard0001", splitKeys: [ { a: new Date(4000000) } ], shardId: "test.foo_date-a_new Date(1000000)", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:20 [conn4] created new distributed lock for test.foo_date on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979447271f2fe6d09db23
m30001| Thu Jun 14 01:40:20 [conn4] splitChunk accepted at version 1|4||4fd979448a26dcf9048e3fbf
m30001| Thu Jun 14 01:40:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:20-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652420831), what: "split", ns: "test.foo_date", details: { before: { min: { a: new Date(1000000) }, max: { a: new Date(6000000) }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979448a26dcf9048e3fbf') }, right: { min: { a: new Date(4000000) }, max: { a: new Date(6000000) }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979448a26dcf9048e3fbf') } } }
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:20 [conn] CMD: movechunk: { movechunk: "test.foo_date", find: { a: new Date(1000000) }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:20 [conn] moving chunk ns: test.foo_date moving ( ns:test.foo_date at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: new Date(1000000) } max: { a: new Date(4000000) }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:20 [conn4] received moveChunk request: { moveChunk: "test.foo_date", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, maxChunkSizeBytes: 52428800, shardId: "test.foo_date-a_new Date(1000000)", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:20 [conn4] created new distributed lock for test.foo_date on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:20 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979447271f2fe6d09db24
m30001| Thu Jun 14 01:40:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:20-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652420834), what: "moveChunk.start", ns: "test.foo_date", details: { min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:20 [conn4] moveChunk request accepted at version 1|6||4fd979448a26dcf9048e3fbf
m30001| Thu Jun 14 01:40:20 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:20 [migrateThread] build index test.foo_date { _id: 1 }
m30000| Thu Jun 14 01:40:20 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:20 [migrateThread] info: creating collection test.foo_date on add index
m30000| Thu Jun 14 01:40:20 [migrateThread] build index test.foo_date { a: 1.0 }
m30000| Thu Jun 14 01:40:20 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_date' { a: new Date(1000000) } -> { a: new Date(4000000) }
m30001| Thu Jun 14 01:40:21 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_date", from: "localhost:30001", min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, shardKeyPattern: { a: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 99, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:21 [conn4] moveChunk setting version to: 2|0||4fd979448a26dcf9048e3fbf
m30000| Thu Jun 14 01:40:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_date' { a: new Date(1000000) } -> { a: new Date(4000000) }
m30000| Thu Jun 14 01:40:21 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:21-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652421845), what: "moveChunk.to", ns: "test.foo_date", details: { min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30001| Thu Jun 14 01:40:21 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_date", from: "localhost:30001", min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, shardKeyPattern: { a: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 99, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:21 [conn4] moveChunk updating self version to: 2|1||4fd979448a26dcf9048e3fbf through { a: MinKey } -> { a: new Date(1000000) } for collection 'test.foo_date'
m30001| Thu Jun 14 01:40:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:21-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652421849), what: "moveChunk.commit", ns: "test.foo_date", details: { min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:21 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:21 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_date/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:21-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652421850), what: "moveChunk.from", ns: "test.foo_date", details: { min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:21 [conn4] command admin.$cmd command: { moveChunk: "test.foo_date", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: new Date(1000000) }, max: { a: new Date(4000000) }, maxChunkSizeBytes: 52428800, shardId: "test.foo_date-a_new Date(1000000)", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:1554 w:830745 reslen:37 1016ms
m30999| Thu Jun 14 01:40:21 [conn] ChunkManager: time to load chunks for test.foo_date: 0ms sequenceNumber: 16 version: 2|1||4fd979448a26dcf9048e3fbf based on: 1|6||4fd979448a26dcf9048e3fbf
ShardingTest
test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
m30000| Thu Jun 14 01:40:21 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:21 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:21 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:21 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:21 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_date",
    "count" : 6,
    "numExtents" : 2,
    "size" : 216,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "a_1" : 16352
    },
    "avgObjSize" : 36,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_date",
            "count" : 3,
            "size" : 108,
            "avgObjSize" : 36,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_date",
            "count" : 3,
            "size" : 108,
            "avgObjSize" : 36,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
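Each "about to log metadata event" line above corresponds to a document written to config.changelog (what: "split", "moveChunk.start", "moveChunk.commit", "moveChunk.from", "moveChunk.to"). To review the split and migration history for one of these collections after the run, a query along these lines through mongos should work (a sketch, assuming the changelog fields shown in the log):

// Sketch: pull the split/moveChunk history for test.foo_date from the config changelog.
var config = db.getSiblingDB("config");
config.changelog.find({
    ns: "test.foo_date",
    what: { $in: ["split", "moveChunk.start", "moveChunk.commit", "moveChunk.from", "moveChunk.to"] }
}).sort({ time: 1 }).forEach(printjson);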
#### Now Testing string_id ####
m30999| Thu Jun 14 01:40:21 [conn] CMD: shardcollection: { shardcollection: "test.foo_string_id", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:40:21 [conn] enable sharding on: test.foo_string_id with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:40:21 [conn] going to create 1 chunk(s) for: test.foo_string_id using new epoch 4fd979458a26dcf9048e3fc0
m30001| Thu Jun 14 01:40:21 [conn4] build index test.foo_string_id { _id: 1 }
m30001| Thu Jun 14 01:40:21 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:21 [conn4] info: creating collection test.foo_string_id on add index
m30999| Thu Jun 14 01:40:21 [conn] ChunkManager: time to load chunks for test.foo_string_id: 0ms sequenceNumber: 17 version: 1|0||4fd979458a26dcf9048e3fc0 based on: (empty)
m30999| Thu Jun 14 01:40:21 [conn] resetting shard version of test.foo_string_id on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:21 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:21 [conn] splitting: test.foo_string_id shard: ns:test.foo_string_id at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:40:21 [conn4] request split points lookup for chunk test.foo_string_id { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:21 [conn4] received splitChunk request: { splitChunk: "test.foo_string_id", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: "allan" } ], shardId: "test.foo_string_id-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:21 [conn4] created new distributed lock for test.foo_string_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979457271f2fe6d09db25
m30001| Thu Jun 14 01:40:21 [conn4] splitChunk accepted at version 1|0||4fd979458a26dcf9048e3fc0
m30001| Thu Jun 14 01:40:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:21-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652421879), what: "split", ns: "test.foo_string_id", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: "allan" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979458a26dcf9048e3fc0') }, right: { min: { _id: "allan" }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979458a26dcf9048e3fc0') } } }
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:21 [conn] ChunkManager: time to load chunks for test.foo_string_id: 0ms sequenceNumber: 18 version: 1|2||4fd979458a26dcf9048e3fc0 based on: 1|0||4fd979458a26dcf9048e3fc0
m30999| Thu Jun 14 01:40:21 [conn] splitting: test.foo_string_id shard: ns:test.foo_string_id at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: "allan" } max: { _id: MaxKey }
m30001| Thu Jun 14 01:40:21 [conn4] request split points lookup for chunk test.foo_string_id { : "allan" } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:21 [conn4] received splitChunk request: { splitChunk: "test.foo_string_id", keyPattern: { _id: 1.0 }, min: { _id: "allan" }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: "sara" } ], shardId: "test.foo_string_id-_id_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:21 [conn4] created new distributed lock for test.foo_string_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979457271f2fe6d09db26
m30001| Thu Jun 14 01:40:21 [conn4] splitChunk accepted at version 1|2||4fd979458a26dcf9048e3fc0
m30001| Thu Jun 14 01:40:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:21-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652421883), what: "split", ns: "test.foo_string_id", details: { before: { min: { _id: "allan" }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: "allan" }, max: { _id: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979458a26dcf9048e3fc0') }, right: { min: { _id: "sara" }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979458a26dcf9048e3fc0') } } }
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:21 [conn] ChunkManager: time to load chunks for test.foo_string_id: 0ms sequenceNumber: 19 version: 1|4||4fd979458a26dcf9048e3fc0 based on: 1|2||4fd979458a26dcf9048e3fc0
m30999| Thu Jun 14 01:40:21 [conn] splitting: test.foo_string_id shard: ns:test.foo_string_id at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: "allan" } max: { _id: "sara" }
m30001| Thu Jun 14 01:40:21 [conn4] request split points lookup for chunk test.foo_string_id { : "allan" } -->> { : "sara" }
m30001| Thu Jun 14 01:40:21 [conn4] received splitChunk request: { splitChunk: "test.foo_string_id", keyPattern: { _id: 1.0 }, min: { _id: "allan" }, max: { _id: "sara" }, from: "shard0001", splitKeys: [ { _id: "joe" } ], shardId: "test.foo_string_id-_id_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:21 [conn4] created new distributed lock for test.foo_string_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979457271f2fe6d09db27
m30001| Thu Jun 14 01:40:21 [conn4] splitChunk accepted at version 1|4||4fd979458a26dcf9048e3fc0
m30001| Thu Jun 14 01:40:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:21-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652421886), what: "split", ns: "test.foo_string_id", details: { before: { min: { _id: "allan" }, max: { _id: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: "allan" }, max: { _id: "joe" }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979458a26dcf9048e3fc0') }, right: { min: { _id: "joe" }, max: { _id: "sara" }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979458a26dcf9048e3fc0') } } }
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:21 [conn] ChunkManager: time to load chunks for test.foo_string_id: 0ms sequenceNumber: 20 version: 1|6||4fd979458a26dcf9048e3fc0 based on: 1|4||4fd979458a26dcf9048e3fc0
m30999| Thu Jun 14 01:40:21 [conn] CMD: movechunk: { movechunk: "test.foo_string_id", find: { _id: "allan" }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:21 [conn] moving chunk ns: test.foo_string_id moving ( ns:test.foo_string_id at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { _id: "allan" } max: { _id: "joe" }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:21 [conn4] received moveChunk request: { moveChunk: "test.foo_string_id", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: "allan" }, max: { _id: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_string_id-_id_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:21 [conn4] created new distributed lock for test.foo_string_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:21 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979457271f2fe6d09db28
m30001| Thu Jun 14 01:40:21 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:21-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652421889), what: "moveChunk.start", ns: "test.foo_string_id", details: { min: { _id: "allan" }, max: { _id: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:21 [conn4] moveChunk request accepted at version 1|6||4fd979458a26dcf9048e3fc0
m30001| Thu Jun 14 01:40:21 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:21 [migrateThread] build index test.foo_string_id { _id: 1 }
m30000| Thu Jun 14 01:40:21 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:21 [migrateThread] info: creating collection test.foo_string_id on add index
m30000| Thu Jun 14 01:40:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_string_id' { _id: "allan" } -> { _id: "joe" }
m30001| Thu Jun 14 01:40:22 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_string_id", from: "localhost:30001", min: { _id: "allan" }, max: { _id: "joe" }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:22 [conn4] moveChunk setting version to: 2|0||4fd979458a26dcf9048e3fc0
m30000| Thu Jun 14 01:40:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_string_id' { _id: "allan" } -> { _id: "joe" }
m30000| Thu Jun 14 01:40:22 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:22-3", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652422901), what: "moveChunk.to", ns: "test.foo_string_id", details: { min: { _id: "allan" }, max: { _id: "joe" }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30001| Thu Jun 14 01:40:22 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_string_id", from: "localhost:30001", min: { _id: "allan" }, max: { _id: "joe" }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:22 [conn4] moveChunk updating self version to: 2|1||4fd979458a26dcf9048e3fc0 through { _id: MinKey } -> { _id: "allan" } for collection 'test.foo_string_id'
m30001| Thu Jun 14 01:40:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:22-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652422905), what: "moveChunk.commit", ns: "test.foo_string_id", details: { min: { _id: "allan" }, max: { _id: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:22 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:22 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_string_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:22-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652422906), what: "moveChunk.from", ns: "test.foo_string_id", details: { min: { _id: "allan" }, max: { _id: "joe" }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:22 [conn4] command admin.$cmd command: { moveChunk: "test.foo_string_id", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: "allan" }, max: { _id: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_string_id-_id_"allan"", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:2103 w:831500 reslen:37 1017ms
m30999| Thu Jun 14 01:40:22 [conn] ChunkManager: time to load chunks for test.foo_string_id: 0ms sequenceNumber: 21 version: 2|1||4fd979458a26dcf9048e3fc0 based on: 1|6||4fd979458a26dcf9048e3fc0
ShardingTest
test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
test.foo_string_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : "allan" } shard0001 test.foo_string_id
test.foo_string_id-_id_"allan" 2000|0 { "_id" : "allan" } -> { "_id" : "joe" } shard0000 test.foo_string_id
test.foo_string_id-_id_"joe" 1000|6 { "_id" : "joe" } -> { "_id" : "sara" } shard0001 test.foo_string_id
test.foo_string_id-_id_"sara" 1000|4 { "_id" : "sara" } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_string_id
m30000| Thu Jun 14 01:40:22 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:22 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:22 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:22 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:22 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_string_id",
    "count" : 6,
    "numExtents" : 2,
    "size" : 120,
    "storageSize" : 16384,
    "totalIndexSize" : 16352,
    "indexSizes" : {
        "_id_" : 16352
    },
    "avgObjSize" : 20,
    "nindexes" : 1,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_string_id",
            "count" : 3,
            "size" : 60,
            "avgObjSize" : 20,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 1,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 8176,
            "indexSizes" : {
                "_id_" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_string_id",
            "count" : 3,
            "size" : 60,
            "avgObjSize" : 20,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 1,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 8176,
            "indexSizes" : {
                "_id_" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
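Note that the string_id pass reports nindexes: 1 with only an _id_ entry under indexSizes, while the earlier passes report 2: when the shard key is { _id: 1 }, the mandatory _id index already covers it, so no extra shard-key index is built. A quick shell check to confirm this (a hypothetical follow-up, not part of the test output):

// Sketch: compare the index sets of the _id-keyed and k-keyed collections.
var testdb = db.getSiblingDB("test");
printjson(testdb.foo_string_id.getIndexes());   // only { _id: 1 }
printjson(testdb.foo_string.getIndexes());      // { _id: 1 } plus the shard-key index { k: 1 }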
#### Now Testing embedded 1 ####
m30999| Thu Jun 14 01:40:22 [conn] CMD: shardcollection: { shardcollection: "test.foo_embedded 1", key: { a.b: 1.0 } }
m30999| Thu Jun 14 01:40:22 [conn] enable sharding on: test.foo_embedded 1 with shard key: { a.b: 1.0 }
m30999| Thu Jun 14 01:40:22 [conn] going to create 1 chunk(s) for: test.foo_embedded 1 using new epoch 4fd979468a26dcf9048e3fc1
m30999| Thu Jun 14 01:40:22 [conn] ChunkManager: time to load chunks for test.foo_embedded 1: 0ms sequenceNumber: 22 version: 1|0||4fd979468a26dcf9048e3fc1 based on: (empty)
m30999| Thu Jun 14 01:40:22 [conn] resetting shard version of test.foo_embedded 1 on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:22 [conn4] build index test.foo_embedded 1 { _id: 1 }
m30001| Thu Jun 14 01:40:22 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:22 [conn4] info: creating collection test.foo_embedded 1 on add index
m30001| Thu Jun 14 01:40:22 [conn4] build index test.foo_embedded 1 { a.b: 1.0 }
m30001| Thu Jun 14 01:40:22 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:22 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:22 [conn] splitting: test.foo_embedded 1 shard: ns:test.foo_embedded 1 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a.b: MinKey } max: { a.b: MaxKey }
m30001| Thu Jun 14 01:40:22 [conn4] request split points lookup for chunk test.foo_embedded 1 { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:22 [conn4] received splitChunk request: { splitChunk: "test.foo_embedded 1", keyPattern: { a.b: 1.0 }, min: { a.b: MinKey }, max: { a.b: MaxKey }, from: "shard0001", splitKeys: [ { a.b: "allan" } ], shardId: "test.foo_embedded 1-a.b_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:22 [conn4] created new distributed lock for test.foo_embedded 1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979467271f2fe6d09db29
m30001| Thu Jun 14 01:40:22 [conn4] splitChunk accepted at version 1|0||4fd979468a26dcf9048e3fc1
m30001| Thu Jun 14 01:40:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:22-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652422933), what: "split", ns: "test.foo_embedded 1", details: { before: { min: { a.b: MinKey }, max: { a.b: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a.b: MinKey }, max: { a.b: "allan" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979468a26dcf9048e3fc1') }, right: { min: { a.b: "allan" }, max: { a.b: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979468a26dcf9048e3fc1') } } }
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:22 [conn] ChunkManager: time to load chunks for test.foo_embedded 1: 0ms sequenceNumber: 23 version: 1|2||4fd979468a26dcf9048e3fc1 based on: 1|0||4fd979468a26dcf9048e3fc1
m30999| Thu Jun 14 01:40:22 [conn] splitting: test.foo_embedded 1 shard: ns:test.foo_embedded 1 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a.b: "allan" } max: { a.b: MaxKey }
m30001| Thu Jun 14 01:40:22 [conn4] request split points lookup for chunk test.foo_embedded 1 { : "allan" } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:22 [conn4] received splitChunk request: { splitChunk: "test.foo_embedded 1", keyPattern: { a.b: 1.0 }, min: { a.b: "allan" }, max: { a.b: MaxKey }, from: "shard0001", splitKeys: [ { a.b: "sara" } ], shardId: "test.foo_embedded 1-a.b_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:22 [conn4] created new distributed lock for test.foo_embedded 1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979467271f2fe6d09db2a
m30001| Thu Jun 14 01:40:22 [conn4] splitChunk accepted at version 1|2||4fd979468a26dcf9048e3fc1
m30001| Thu Jun 14 01:40:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:22-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652422937), what: "split", ns: "test.foo_embedded 1", details: { before: { min: { a.b: "allan" }, max: { a.b: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a.b: "allan" }, max: { a.b: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979468a26dcf9048e3fc1') }, right: { min: { a.b: "sara" }, max: { a.b: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979468a26dcf9048e3fc1') } } }
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:22 [conn] ChunkManager: time to load chunks for test.foo_embedded 1: 0ms sequenceNumber: 24 version: 1|4||4fd979468a26dcf9048e3fc1 based on: 1|2||4fd979468a26dcf9048e3fc1
m30999| Thu Jun 14 01:40:22 [conn] splitting: test.foo_embedded 1 shard: ns:test.foo_embedded 1 at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a.b: "allan" } max: { a.b: "sara" }
m30999| Thu Jun 14 01:40:22 [conn] ChunkManager: time to load chunks for test.foo_embedded 1: 0ms sequenceNumber: 25 version: 1|6||4fd979468a26dcf9048e3fc1 based on: 1|4||4fd979468a26dcf9048e3fc1
m30001| Thu Jun 14 01:40:22 [conn4] request split points lookup for chunk test.foo_embedded 1 { : "allan" } -->> { : "sara" }
m30001| Thu Jun 14 01:40:22 [conn4] received splitChunk request: { splitChunk: "test.foo_embedded 1", keyPattern: { a.b: 1.0 }, min: { a.b: "allan" }, max: { a.b: "sara" }, from: "shard0001", splitKeys: [ { a.b: "joe" } ], shardId: "test.foo_embedded 1-a.b_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:22 [conn4] created new distributed lock for test.foo_embedded 1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979467271f2fe6d09db2b
m30001| Thu Jun 14 01:40:22 [conn4] splitChunk accepted at version 1|4||4fd979468a26dcf9048e3fc1
m30001| Thu Jun 14 01:40:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:22-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652422940), what: "split", ns: "test.foo_embedded 1", details: { before: { min: { a.b: "allan" }, max: { a.b: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a.b: "allan" }, max: { a.b: "joe" }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979468a26dcf9048e3fc1') }, right: { min: { a.b: "joe" }, max: { a.b: "sara" }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979468a26dcf9048e3fc1') } } }
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:22 [conn] CMD: movechunk: { movechunk: "test.foo_embedded 1", find: { a.b: "allan" }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:22 [conn] moving chunk ns: test.foo_embedded 1 moving ( ns:test.foo_embedded 1 at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a.b: "allan" } max: { a.b: "joe" }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:22 [conn4] received moveChunk request: { moveChunk: "test.foo_embedded 1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a.b: "allan" }, max: { a.b: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_embedded 1-a.b_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:22 [conn4] created new distributed lock for test.foo_embedded 1 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:22 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979467271f2fe6d09db2c
m30001| Thu Jun 14 01:40:22 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:22-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652422943), what: "moveChunk.start", ns: "test.foo_embedded 1", details: { min: { a.b: "allan" }, max: { a.b: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:22 [conn4] moveChunk request accepted at version 1|6||4fd979468a26dcf9048e3fc1
m30001| Thu Jun 14 01:40:22 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:22 [migrateThread] build index test.foo_embedded 1 { _id: 1 }
m30000| Thu Jun 14 01:40:22 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:22 [migrateThread] info: creating collection test.foo_embedded 1 on add index
m30000| Thu Jun 14 01:40:22 [migrateThread] build index test.foo_embedded 1 { a.b: 1.0 }
m30000| Thu Jun 14 01:40:22 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_embedded 1' { a.b: "allan" } -> { a.b: "joe" }
m30001| Thu Jun 14 01:40:23 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_embedded 1", from: "localhost:30001", min: { a.b: "allan" }, max: { a.b: "joe" }, shardKeyPattern: { a.b: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 127, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:23 [conn4] moveChunk setting version to: 2|0||4fd979468a26dcf9048e3fc1
m30000| Thu Jun 14 01:40:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_embedded 1' { a.b: "allan" } -> { a.b: "joe" }
m30000| Thu Jun 14 01:40:23 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:23-4", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652423953), what: "moveChunk.to", ns: "test.foo_embedded 1", details: { min: { a.b: "allan" }, max: { a.b: "joe" }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1007 } }
m30001| Thu Jun 14 01:40:23 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_embedded 1", from: "localhost:30001", min: { a.b: "allan" }, max: { a.b: "joe" }, shardKeyPattern: { a.b: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 127, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:23 [conn4] moveChunk updating self version to: 2|1||4fd979468a26dcf9048e3fc1 through { a.b: MinKey } -> { a.b: "allan" } for collection 'test.foo_embedded 1'
m30001| Thu Jun 14 01:40:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:23-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652423957), what: "moveChunk.commit", ns: "test.foo_embedded 1", details: { min: { a.b: "allan" }, max: { a.b: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:23 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:23 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 1/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:23-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652423958), what: "moveChunk.from", ns: "test.foo_embedded 1", details: { min: { a.b: "allan" }, max: { a.b: "joe" }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:23 [conn4] command admin.$cmd command: { moveChunk: "test.foo_embedded 1", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a.b: "allan" }, max: { a.b: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_embedded 1-a.b_"allan"", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:2693 w:832574 reslen:37 1015ms
m30999| Thu Jun 14 01:40:23 [conn] ChunkManager: time to load chunks for test.foo_embedded 1: 0ms sequenceNumber: 26 version: 2|1||4fd979468a26dcf9048e3fc1 based on: 1|6||4fd979468a26dcf9048e3fc1
ShardingTest test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_embedded 1-a.b_MinKey 2000|1 { "a.b" : { $minKey : 1 } } -> { "a.b" : "allan" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"allan" 2000|0 { "a.b" : "allan" } -> { "a.b" : "joe" } shard0000 test.foo_embedded 1
test.foo_embedded 1-a.b_"joe" 1000|6 { "a.b" : "joe" } -> { "a.b" : "sara" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"sara" 1000|4 { "a.b" : "sara" } -> { "a.b" : { $maxKey : 1 } } shard0001 test.foo_embedded 1
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
test.foo_string_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : "allan" } shard0001 test.foo_string_id
test.foo_string_id-_id_"allan" 2000|0 { "_id" : "allan" } -> { "_id" : "joe" } shard0000 test.foo_string_id
test.foo_string_id-_id_"joe" 1000|6 { "_id" : "joe" } -> { "_id" : "sara" } shard0001 test.foo_string_id
test.foo_string_id-_id_"sara" 1000|4 { "_id" : "sara" } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_string_id
m30000| Thu Jun 14 01:40:23 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:23 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:23 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:23 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:23 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_embedded 1",
    "count" : 6,
    "numExtents" : 2,
    "size" : 264,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "a.b_1" : 16352
    },
    "avgObjSize" : 44,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_embedded 1",
            "count" : 3,
            "size" : 132,
            "avgObjSize" : 44,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a.b_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_embedded 1",
            "count" : 3,
            "size" : 132,
            "avgObjSize" : 44,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a.b_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
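The "embedded 1" pass above exercises a shard key on a dotted field ("a.b"). The test script itself is not part of this log, but the admin commands it drives through mongos can be read straight off the CMD lines; a minimal shell sketch that would reproduce the same sequence (collection name, split points, and target host all taken from the log, and database-level sharding assumed to have been enabled earlier in the run) is:

    // sketch only -- not the actual test file behind this log
    db = db.getSiblingDB("test");
    coll = db.getCollection("foo_embedded 1");   // the test uses a collection name containing a space
    sh.enableSharding("test");                   // assumed: done earlier in this run
    db.adminCommand({ shardcollection: "test.foo_embedded 1", key: { "a.b": 1 } });
    db.adminCommand({ split: "test.foo_embedded 1", middle: { "a.b": "allan" } });
    db.adminCommand({ split: "test.foo_embedded 1", middle: { "a.b": "sara" } });
    db.adminCommand({ split: "test.foo_embedded 1", middle: { "a.b": "joe" } });
    db.adminCommand({ movechunk: "test.foo_embedded 1", find: { "a.b": "allan" }, to: "localhost:30000" });
    printjson(coll.stats());                     // prints a sharded collstats document like the one above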
#### Now Testing embedded 2 ####
m30999| Thu Jun 14 01:40:23 [conn] CMD: shardcollection: { shardcollection: "test.foo_embedded 2", key: { a.b.c: 1.0 } }
m30999| Thu Jun 14 01:40:23 [conn] enable sharding on: test.foo_embedded 2 with shard key: { a.b.c: 1.0 }
m30999| Thu Jun 14 01:40:23 [conn] going to create 1 chunk(s) for: test.foo_embedded 2 using new epoch 4fd979478a26dcf9048e3fc2
m30999| Thu Jun 14 01:40:23 [conn] ChunkManager: time to load chunks for test.foo_embedded 2: 0ms sequenceNumber: 27 version: 1|0||4fd979478a26dcf9048e3fc2 based on: (empty)
m30999| Thu Jun 14 01:40:23 [conn] resetting shard version of test.foo_embedded 2 on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:23 [conn4] build index test.foo_embedded 2 { _id: 1 }
m30001| Thu Jun 14 01:40:23 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:23 [conn4] info: creating collection test.foo_embedded 2 on add index
m30001| Thu Jun 14 01:40:23 [conn4] build index test.foo_embedded 2 { a.b.c: 1.0 }
m30001| Thu Jun 14 01:40:23 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:23 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:23 [conn] splitting: test.foo_embedded 2 shard: ns:test.foo_embedded 2 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a.b.c: MinKey } max: { a.b.c: MaxKey }
m30001| Thu Jun 14 01:40:23 [conn4] request split points lookup for chunk test.foo_embedded 2 { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:23 [conn4] received splitChunk request: { splitChunk: "test.foo_embedded 2", keyPattern: { a.b.c: 1.0 }, min: { a.b.c: MinKey }, max: { a.b.c: MaxKey }, from: "shard0001", splitKeys: [ { a.b.c: "allan" } ], shardId: "test.foo_embedded 2-a.b.c_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:23 [conn4] created new distributed lock for test.foo_embedded 2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979477271f2fe6d09db2d
m30001| Thu Jun 14 01:40:23 [conn4] splitChunk accepted at version 1|0||4fd979478a26dcf9048e3fc2
m30001| Thu Jun 14 01:40:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:23-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652423987), what: "split", ns: "test.foo_embedded 2", details: { before: { min: { a.b.c: MinKey }, max: { a.b.c: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a.b.c: MinKey }, max: { a.b.c: "allan" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979478a26dcf9048e3fc2') }, right: { min: { a.b.c: "allan" }, max: { a.b.c: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979478a26dcf9048e3fc2') } } }
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:23 [conn] ChunkManager: time to load chunks for test.foo_embedded 2: 0ms sequenceNumber: 28 version: 1|2||4fd979478a26dcf9048e3fc2 based on: 1|0||4fd979478a26dcf9048e3fc2
m30999| Thu Jun 14 01:40:23 [conn] splitting: test.foo_embedded 2 shard: ns:test.foo_embedded 2 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a.b.c: "allan" } max: { a.b.c: MaxKey }
m30001| Thu Jun 14 01:40:23 [conn4] request split points lookup for chunk test.foo_embedded 2 { : "allan" } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:23 [conn4] received splitChunk request: { splitChunk: "test.foo_embedded 2", keyPattern: { a.b.c: 1.0 }, min: { a.b.c: "allan" }, max: { a.b.c: MaxKey }, from: "shard0001", splitKeys: [ { a.b.c: "sara" } ], shardId: "test.foo_embedded 2-a.b.c_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:23 [conn4] created new distributed lock for test.foo_embedded 2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979477271f2fe6d09db2e
m30001| Thu Jun 14 01:40:23 [conn4] splitChunk accepted at version 1|2||4fd979478a26dcf9048e3fc2
m30001| Thu Jun 14 01:40:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:23-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652423991), what: "split", ns: "test.foo_embedded 2", details: { before: { min: { a.b.c: "allan" }, max: { a.b.c: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a.b.c: "allan" }, max: { a.b.c: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979478a26dcf9048e3fc2') }, right: { min: { a.b.c: "sara" }, max: { a.b.c: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979478a26dcf9048e3fc2') } } }
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:23 [conn] ChunkManager: time to load chunks for test.foo_embedded 2: 0ms sequenceNumber: 29 version: 1|4||4fd979478a26dcf9048e3fc2 based on: 1|2||4fd979478a26dcf9048e3fc2
m30999| Thu Jun 14 01:40:23 [conn] splitting: test.foo_embedded 2 shard: ns:test.foo_embedded 2 at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a.b.c: "allan" } max: { a.b.c: "sara" }
m30999| Thu Jun 14 01:40:23 [conn] ChunkManager: time to load chunks for test.foo_embedded 2: 0ms sequenceNumber: 30 version: 1|6||4fd979478a26dcf9048e3fc2 based on: 1|4||4fd979478a26dcf9048e3fc2
m30001| Thu Jun 14 01:40:23 [conn4] request split points lookup for chunk test.foo_embedded 2 { : "allan" } -->> { : "sara" }
m30001| Thu Jun 14 01:40:23 [conn4] received splitChunk request: { splitChunk: "test.foo_embedded 2", keyPattern: { a.b.c: 1.0 }, min: { a.b.c: "allan" }, max: { a.b.c: "sara" }, from: "shard0001", splitKeys: [ { a.b.c: "joe" } ], shardId: "test.foo_embedded 2-a.b.c_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:23 [conn4] created new distributed lock for test.foo_embedded 2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979477271f2fe6d09db2f
m30001| Thu Jun 14 01:40:23 [conn4] splitChunk accepted at version 1|4||4fd979478a26dcf9048e3fc2
m30001| Thu Jun 14 01:40:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:23-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652423995), what: "split", ns: "test.foo_embedded 2", details: { before: { min: { a.b.c: "allan" }, max: { a.b.c: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979478a26dcf9048e3fc2') }, right: { min: { a.b.c: "joe" }, max: { a.b.c: "sara" }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979478a26dcf9048e3fc2') } } }
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:23 [conn] CMD: movechunk: { movechunk: "test.foo_embedded 2", find: { a.b.c: "allan" }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:23 [conn] moving chunk ns: test.foo_embedded 2 moving ( ns:test.foo_embedded 2 at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a.b.c: "allan" } max: { a.b.c: "joe" }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:23 [conn4] received moveChunk request: { moveChunk: "test.foo_embedded 2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_embedded 2-a.b.c_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:23 [conn4] created new distributed lock for test.foo_embedded 2 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:23 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979477271f2fe6d09db30
m30001| Thu Jun 14 01:40:23 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:23-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652423997), what: "moveChunk.start", ns: "test.foo_embedded 2", details: { min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:23 [conn4] moveChunk request accepted at version 1|6||4fd979478a26dcf9048e3fc2
m30001| Thu Jun 14 01:40:23 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:23 [migrateThread] build index test.foo_embedded 2 { _id: 1 }
m30000| Thu Jun 14 01:40:23 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:23 [migrateThread] info: creating collection test.foo_embedded 2 on add index
m30000| Thu Jun 14 01:40:23 [migrateThread] build index test.foo_embedded 2 { a.b.c: 1.0 }
m30000| Thu Jun 14 01:40:23 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_embedded 2' { a.b.c: "allan" } -> { a.b.c: "joe" }
m30001| Thu Jun 14 01:40:25 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_embedded 2", from: "localhost:30001", min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, shardKeyPattern: { a.b.c: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 151, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:25 [conn4] moveChunk setting version to: 2|0||4fd979478a26dcf9048e3fc2
m30000| Thu Jun 14 01:40:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_embedded 2' { a.b.c: "allan" } -> { a.b.c: "joe" }
m30000| Thu Jun 14 01:40:25 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:25-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652425009), what: "moveChunk.to", ns: "test.foo_embedded 2", details: { min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30001| Thu Jun 14 01:40:25 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_embedded 2", from: "localhost:30001", min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, shardKeyPattern: { a.b.c: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 151, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:25 [conn4] moveChunk updating self version to: 2|1||4fd979478a26dcf9048e3fc2 through { a.b.c: MinKey } -> { a.b.c: "allan" } for collection 'test.foo_embedded 2'
m30001| Thu Jun 14 01:40:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:25-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652425013), what: "moveChunk.commit", ns: "test.foo_embedded 2", details: { min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:25 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:25 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_embedded 2/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:25-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652425014), what: "moveChunk.from", ns: "test.foo_embedded 2", details: { min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:25 [conn4] command admin.$cmd command: { moveChunk: "test.foo_embedded 2", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a.b.c: "allan" }, max: { a.b.c: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo_embedded 2-a.b.c_"allan"", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:3230 w:833670 reslen:37 1017ms
m30999| Thu Jun 14 01:40:25 [conn] ChunkManager: time to load chunks for test.foo_embedded 2: 0ms sequenceNumber: 31 version: 2|1||4fd979478a26dcf9048e3fc2 based on: 1|6||4fd979478a26dcf9048e3fc2
ShardingTest test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_embedded 1-a.b_MinKey 2000|1 { "a.b" : { $minKey : 1 } } -> { "a.b" : "allan" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"allan" 2000|0 { "a.b" : "allan" } -> { "a.b" : "joe" } shard0000 test.foo_embedded 1
test.foo_embedded 1-a.b_"joe" 1000|6 { "a.b" : "joe" } -> { "a.b" : "sara" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"sara" 1000|4 { "a.b" : "sara" } -> { "a.b" : { $maxKey : 1 } } shard0001 test.foo_embedded 1
test.foo_embedded 2-a.b.c_MinKey 2000|1 { "a.b.c" : { $minKey : 1 } } -> { "a.b.c" : "allan" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"allan" 2000|0 { "a.b.c" : "allan" } -> { "a.b.c" : "joe" } shard0000 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"joe" 1000|6 { "a.b.c" : "joe" } -> { "a.b.c" : "sara" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"sara" 1000|4 { "a.b.c" : "sara" } -> { "a.b.c" : { $maxKey : 1 } } shard0001 test.foo_embedded 2
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
test.foo_string_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : "allan" } shard0001 test.foo_string_id
test.foo_string_id-_id_"allan" 2000|0 { "_id" : "allan" } -> { "_id" : "joe" } shard0000 test.foo_string_id
test.foo_string_id-_id_"joe" 1000|6 { "_id" : "joe" } -> { "_id" : "sara" } shard0001 test.foo_string_id
test.foo_string_id-_id_"sara" 1000|4 { "_id" : "sara" } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_string_id
m30000| Thu Jun 14 01:40:25 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:25 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:25 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:25 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:25 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_embedded 2",
    "count" : 6,
    "numExtents" : 2,
    "size" : 312,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "a.b.c_1" : 16352
    },
    "avgObjSize" : 52,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_embedded 2",
            "count" : 3,
            "size" : 156,
            "avgObjSize" : 52,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a.b.c_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_embedded 2",
            "count" : 3,
            "size" : 156,
            "avgObjSize" : 52,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "a.b.c_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
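The ShardingTest dump printed above is a formatted read of the cluster's config metadata. A sketch of inspecting the same chunk layout directly through mongos, assuming the standard 2.x config.collections/config.chunks schema (field names as they appear in the log entries above), is:

    // sketch: read the chunk layout behind the dump above straight from the config db
    conf = db.getSiblingDB("config");
    conf.collections.find({ _id: /^test\.foo_/ }).forEach(printjson);   // shard key and epoch per collection
    conf.chunks.find({ ns: "test.foo_embedded 2" }).sort({ min: 1 }).forEach(function (c) {
        // one line per chunk: range and owning shard, same information as the dump lines
        print(tojson(c.min) + " -->> " + tojson(c.max) + " on " + c.shard);
    });
    sh.status();                                                        // prints a similar cluster-wide summary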
#### Now Testing object ####
m30999| Thu Jun 14 01:40:25 [conn] CMD: shardcollection: { shardcollection: "test.foo_object", key: { o: 1.0 } }
m30999| Thu Jun 14 01:40:25 [conn] enable sharding on: test.foo_object with shard key: { o: 1.0 }
m30999| Thu Jun 14 01:40:25 [conn] going to create 1 chunk(s) for: test.foo_object using new epoch 4fd979498a26dcf9048e3fc3
m30999| Thu Jun 14 01:40:25 [conn] ChunkManager: time to load chunks for test.foo_object: 0ms sequenceNumber: 32 version: 1|0||4fd979498a26dcf9048e3fc3 based on: (empty)
m30999| Thu Jun 14 01:40:25 [conn] resetting shard version of test.foo_object on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:25 [conn4] build index test.foo_object { _id: 1 }
m30001| Thu Jun 14 01:40:25 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:25 [conn4] info: creating collection test.foo_object on add index
m30001| Thu Jun 14 01:40:25 [conn4] build index test.foo_object { o: 1.0 }
m30001| Thu Jun 14 01:40:25 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:25 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:25 [conn] splitting: test.foo_object shard: ns:test.foo_object at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { o: MinKey } max: { o: MaxKey }
m30001| Thu Jun 14 01:40:25 [conn4] request split points lookup for chunk test.foo_object { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:25 [conn4] received splitChunk request: { splitChunk: "test.foo_object", keyPattern: { o: 1.0 }, min: { o: MinKey }, max: { o: MaxKey }, from: "shard0001", splitKeys: [ { o: { a: 1.0, b: 1.2 } } ], shardId: "test.foo_object-o_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:25 [conn4] created new distributed lock for test.foo_object on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979497271f2fe6d09db31
m30001| Thu Jun 14 01:40:25 [conn4] splitChunk accepted at version 1|0||4fd979498a26dcf9048e3fc3
m30001| Thu Jun 14 01:40:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:25-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652425043), what: "split", ns: "test.foo_object", details: { before: { min: { o: MinKey }, max: { o: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o: MinKey }, max: { o: { a: 1.0, b: 1.2 } }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979498a26dcf9048e3fc3') }, right: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979498a26dcf9048e3fc3') } } }
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:25 [conn] ChunkManager: time to load chunks for test.foo_object: 0ms sequenceNumber: 33 version: 1|2||4fd979498a26dcf9048e3fc3 based on: 1|0||4fd979498a26dcf9048e3fc3
m30999| Thu Jun 14 01:40:25 [conn] splitting: test.foo_object shard: ns:test.foo_object at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { o: { a: 1.0, b: 1.2 } } max: { o: MaxKey }
m30001| Thu Jun 14 01:40:25 [conn4] request split points lookup for chunk test.foo_object { : { a: 1.0, b: 1.2 } } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:25 [conn4] received splitChunk request: { splitChunk: "test.foo_object", keyPattern: { o: 1.0 }, min: { o: { a: 1.0, b: 1.2 } }, max: { o: MaxKey }, from: "shard0001", splitKeys: [ { o: { a: 2.0, b: 4.5 } } ], shardId: "test.foo_object-o_{ a: 1.0, b: 1.2 }", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:25 [conn4] created new distributed lock for test.foo_object on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979497271f2fe6d09db32
m30001| Thu Jun 14 01:40:25 [conn4] splitChunk accepted at version 1|2||4fd979498a26dcf9048e3fc3
m30001| Thu Jun 14 01:40:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:25-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652425047), what: "split", ns: "test.foo_object", details: { before: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 4.5 } }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979498a26dcf9048e3fc3') }, right: { min: { o: { a: 2.0, b: 4.5 } }, max: { o: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979498a26dcf9048e3fc3') } } }
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:25 [conn] ChunkManager: time to load chunks for test.foo_object: 0ms sequenceNumber: 34 version: 1|4||4fd979498a26dcf9048e3fc3 based on: 1|2||4fd979498a26dcf9048e3fc3
m30999| Thu Jun 14 01:40:25 [conn] splitting: test.foo_object shard: ns:test.foo_object at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { o: { a: 1.0, b: 1.2 } } max: { o: { a: 2.0, b: 4.5 } }
m30999| Thu Jun 14 01:40:25 [conn] ChunkManager: time to load chunks for test.foo_object: 0ms sequenceNumber: 35 version: 1|6||4fd979498a26dcf9048e3fc3 based on: 1|4||4fd979498a26dcf9048e3fc3
m30001| Thu Jun 14 01:40:25 [conn4] request split points lookup for chunk test.foo_object { : { a: 1.0, b: 1.2 } } -->> { : { a: 2.0, b: 4.5 } }
m30001| Thu Jun 14 01:40:25 [conn4] received splitChunk request: { splitChunk: "test.foo_object", keyPattern: { o: 1.0 }, min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 4.5 } }, from: "shard0001", splitKeys: [ { o: { a: 2.0, b: 1.2 } } ], shardId: "test.foo_object-o_{ a: 1.0, b: 1.2 }", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:25 [conn4] created new distributed lock for test.foo_object on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979497271f2fe6d09db33
m30001| Thu Jun 14 01:40:25 [conn4] splitChunk accepted at version 1|4||4fd979498a26dcf9048e3fc3
m30001| Thu Jun 14 01:40:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:25-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652425107), what: "split", ns: "test.foo_object", details: { before: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 4.5 } }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979498a26dcf9048e3fc3') }, right: { min: { o: { a: 2.0, b: 1.2 } }, max: { o: { a: 2.0, b: 4.5 } }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979498a26dcf9048e3fc3') } } }
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:25 [conn] CMD: movechunk: { movechunk: "test.foo_object", find: { o: { a: 1.0, b: 1.2 } }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:25 [conn] moving chunk ns: test.foo_object moving ( ns:test.foo_object at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { o: { a: 1.0, b: 1.2 } } max: { o: { a: 2.0, b: 1.2 } }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:25 [conn4] received moveChunk request: { moveChunk: "test.foo_object", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, maxChunkSizeBytes: 52428800, shardId: "test.foo_object-o_{ a: 1.0, b: 1.2 }", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:25 [conn4] created new distributed lock for test.foo_object on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:25 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd979497271f2fe6d09db34
m30001| Thu Jun 14 01:40:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:25-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652425110), what: "moveChunk.start", ns: "test.foo_object", details: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:25 [conn4] moveChunk request accepted at version 1|6||4fd979498a26dcf9048e3fc3
m30001| Thu Jun 14 01:40:25 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:25 [migrateThread] build index test.foo_object { _id: 1 }
m30000| Thu Jun 14 01:40:25 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:25 [migrateThread] info: creating collection test.foo_object on add index
m30000| Thu Jun 14 01:40:25 [migrateThread] build index test.foo_object { o: 1.0 }
m30000| Thu Jun 14 01:40:25 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_object' { o: { a: 1.0, b: 1.2 } } -> { o: { a: 2.0, b: 1.2 } }
m30001| Thu Jun 14 01:40:26 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_object", from: "localhost:30001", min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, shardKeyPattern: { o: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 156, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:26 [conn4] moveChunk setting version to: 2|0||4fd979498a26dcf9048e3fc3
m30000| Thu Jun 14 01:40:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_object' { o: { a: 1.0, b: 1.2 } } -> { o: { a: 2.0, b: 1.2 } }
m30000| Thu Jun 14 01:40:26 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:26-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652426121), what: "moveChunk.to", ns: "test.foo_object", details: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1009 } }
m30001| Thu Jun 14 01:40:26 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_object", from: "localhost:30001", min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, shardKeyPattern: { o: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 156, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:26 [conn4] moveChunk updating self version to: 2|1||4fd979498a26dcf9048e3fc3 through { o: MinKey } -> { o: { a: 1.0, b: 1.2 } } for collection 'test.foo_object'
m30001| Thu Jun 14 01:40:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:26-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652426125), what: "moveChunk.commit", ns: "test.foo_object", details: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:26 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:26 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_object/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:26-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652426126), what: "moveChunk.from", ns: "test.foo_object", details: { min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:26 [conn4] command admin.$cmd command: { moveChunk: "test.foo_object", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { o: { a: 1.0, b: 1.2 } }, max: { o: { a: 2.0, b: 1.2 } }, maxChunkSizeBytes: 52428800, shardId: "test.foo_object-o_{ a: 1.0, b: 1.2 }", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:3811 w:834721 reslen:37 1017ms
m30999| Thu Jun 14 01:40:26 [conn] ChunkManager: time to load chunks for test.foo_object: 0ms sequenceNumber: 36 version: 2|1||4fd979498a26dcf9048e3fc3 based on: 1|6||4fd979498a26dcf9048e3fc3
ShardingTest test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_embedded 1-a.b_MinKey 2000|1 { "a.b" : { $minKey : 1 } } -> { "a.b" : "allan" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"allan" 2000|0 { "a.b" : "allan" } -> { "a.b" : "joe" } shard0000 test.foo_embedded 1
test.foo_embedded 1-a.b_"joe" 1000|6 { "a.b" : "joe" } -> { "a.b" : "sara" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"sara" 1000|4 { "a.b" : "sara" } -> { "a.b" : { $maxKey : 1 } } shard0001 test.foo_embedded 1
test.foo_embedded 2-a.b.c_MinKey 2000|1 { "a.b.c" : { $minKey : 1 } } -> { "a.b.c" : "allan" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"allan" 2000|0 { "a.b.c" : "allan" } -> { "a.b.c" : "joe" } shard0000 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"joe" 1000|6 { "a.b.c" : "joe" } -> { "a.b.c" : "sara" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"sara" 1000|4 { "a.b.c" : "sara" } -> { "a.b.c" : { $maxKey : 1 } } shard0001 test.foo_embedded 2
test.foo_object-o_MinKey 2000|1 { "o" : { $minKey : 1 } } -> { "o" : { "a" : 1, "b" : 1.2 } } shard0001 test.foo_object
test.foo_object-o_{ a: 1.0, b: 1.2 } 2000|0 { "o" : { "a" : 1, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 1.2 } } shard0000 test.foo_object
test.foo_object-o_{ a: 2.0, b: 1.2 } 1000|6 { "o" : { "a" : 2, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 4.5 } } shard0001 test.foo_object
test.foo_object-o_{ a: 2.0, b: 4.5 } 1000|4 { "o" : { "a" : 2, "b" : 4.5 } } -> { "o" : { $maxKey : 1 } } shard0001 test.foo_object
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
test.foo_string_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : "allan" } shard0001 test.foo_string_id
test.foo_string_id-_id_"allan" 2000|0 { "_id" : "allan" } -> { "_id" : "joe" } shard0000 test.foo_string_id
test.foo_string_id-_id_"joe" 1000|6 { "_id" : "joe" } -> { "_id" : "sara" } shard0001 test.foo_string_id
test.foo_string_id-_id_"sara" 1000|4 { "_id" : "sara" } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_string_id
m30000| Thu Jun 14 01:40:26 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:26 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:26 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:26 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:26 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_object",
    "count" : 6,
    "numExtents" : 2,
    "size" : 312,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "o_1" : 16352
    },
    "avgObjSize" : 52,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_object",
            "count" : 3,
            "size" : 156,
            "avgObjSize" : 52,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "o_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_object",
            "count" : 3,
            "size" : 156,
            "avgObjSize" : 52,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "o_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
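The "object" pass above shards on a whole embedded document as the key, while the "compound" pass that follows shards on two dotted fields of that document. A sketch of the two shardcollection forms, with collection names and sample bounds taken from the surrounding log, is:

    // sketch: the two shard-key shapes exercised next in this log
    db.adminCommand({ shardcollection: "test.foo_object",   key: { o: 1 } });                // whole subdocument as the key
    db.adminCommand({ shardcollection: "test.foo_compound", key: { "o.a": 1, "o.b": 1 } });  // compound key on dotted fields
    // With { o: 1 } the chunk bounds are full subdocuments, e.g. { o: { a: 1, b: 1.2 } };
    // with the compound key they are per-field, e.g. { "o.a": 1, "o.b": 1.2 }.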
#### Now Testing compound ####
m30999| Thu Jun 14 01:40:26 [conn] CMD: shardcollection: { shardcollection: "test.foo_compound", key: { o.a: 1.0, o.b: 1.0 } }
m30999| Thu Jun 14 01:40:26 [conn] enable sharding on: test.foo_compound with shard key: { o.a: 1.0, o.b: 1.0 }
m30999| Thu Jun 14 01:40:26 [conn] going to create 1 chunk(s) for: test.foo_compound using new epoch 4fd9794a8a26dcf9048e3fc4
m30999| Thu Jun 14 01:40:26 [conn] ChunkManager: time to load chunks for test.foo_compound: 0ms sequenceNumber: 37 version: 1|0||4fd9794a8a26dcf9048e3fc4 based on: (empty)
m30999| Thu Jun 14 01:40:26 [conn] resetting shard version of test.foo_compound on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:26 [conn4] build index test.foo_compound { _id: 1 }
m30001| Thu Jun 14 01:40:26 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:26 [conn4] info: creating collection test.foo_compound on add index
m30001| Thu Jun 14 01:40:26 [conn4] build index test.foo_compound { o.a: 1.0, o.b: 1.0 }
m30001| Thu Jun 14 01:40:26 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:26 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:26 [conn] splitting: test.foo_compound shard: ns:test.foo_compound at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { o.a: MinKey, o.b: MinKey } max: { o.a: MaxKey, o.b: MaxKey }
m30001| Thu Jun 14 01:40:26 [conn4] request split points lookup for chunk test.foo_compound { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:40:26 [conn4] received splitChunk request: { splitChunk: "test.foo_compound", keyPattern: { o.a: 1.0, o.b: 1.0 }, min: { o.a: MinKey, o.b: MinKey }, max: { o.a: MaxKey, o.b: MaxKey }, from: "shard0001", splitKeys: [ { o.a: 1.0, o.b: 1.2 } ], shardId: "test.foo_compound-o.a_MinKeyo.b_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:26 [conn4] created new distributed lock for test.foo_compound on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794a7271f2fe6d09db35
m30001| Thu Jun 14 01:40:26 [conn4] splitChunk accepted at version 1|0||4fd9794a8a26dcf9048e3fc4
m30001| Thu Jun 14 01:40:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:26-42", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652426157), what: "split", ns: "test.foo_compound", details: { before: { min: { o.a: MinKey, o.b: MinKey }, max: { o.a: MaxKey, o.b: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o.a: MinKey, o.b: MinKey }, max: { o.a: 1.0, o.b: 1.2 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9794a8a26dcf9048e3fc4') }, right: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: MaxKey, o.b: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9794a8a26dcf9048e3fc4') } } }
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:26 [conn] ChunkManager: time to load chunks for test.foo_compound: 0ms sequenceNumber: 38 version: 1|2||4fd9794a8a26dcf9048e3fc4 based on: 1|0||4fd9794a8a26dcf9048e3fc4
m30999| Thu Jun 14 01:40:26 [conn] splitting: test.foo_compound shard: ns:test.foo_compound at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { o.a: 1.0, o.b: 1.2 } max: { o.a: MaxKey, o.b: MaxKey }
m30001| Thu Jun 14 01:40:26 [conn4] request split points lookup for chunk test.foo_compound { : 1.0, : 1.2 } -->> { : MaxKey, : MaxKey }
m30001| Thu Jun 14 01:40:26 [conn4] received splitChunk request: { splitChunk: "test.foo_compound", keyPattern: { o.a: 1.0, o.b: 1.0 }, min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: MaxKey, o.b: MaxKey }, from: "shard0001", splitKeys: [ { o.a: 2.0, o.b: 4.5 } ], shardId: "test.foo_compound-o.a_1.0o.b_1.2", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:26 [conn4] created new distributed lock for test.foo_compound on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794a7271f2fe6d09db36
m30001| Thu Jun 14 01:40:26 [conn4] splitChunk accepted at version 1|2||4fd9794a8a26dcf9048e3fc4
m30001| Thu Jun 14 01:40:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:26-43", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652426160), what: "split", ns: "test.foo_compound", details: { before: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: MaxKey, o.b: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 4.5 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9794a8a26dcf9048e3fc4') }, right: { min: { o.a: 2.0, o.b: 4.5 }, max: { o.a: MaxKey, o.b: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd9794a8a26dcf9048e3fc4') } } }
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:26 [conn] ChunkManager: time to load chunks for test.foo_compound: 0ms sequenceNumber: 39 version: 1|4||4fd9794a8a26dcf9048e3fc4 based on: 1|2||4fd9794a8a26dcf9048e3fc4
m30999| Thu Jun 14 01:40:26 [conn] splitting: test.foo_compound shard: ns:test.foo_compound at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { o.a: 1.0, o.b: 1.2 } max: { o.a: 2.0, o.b: 4.5 }
m30999| Thu Jun 14 01:40:26 [conn] ChunkManager: time to load chunks for test.foo_compound: 0ms sequenceNumber: 40 version: 1|6||4fd9794a8a26dcf9048e3fc4 based on: 1|4||4fd9794a8a26dcf9048e3fc4
m30001| Thu Jun 14 01:40:26 [conn4] request split points lookup for chunk test.foo_compound { : 1.0, : 1.2 } -->> { : 2.0, : 4.5 }
m30001| Thu Jun 14 01:40:26 [conn4] received splitChunk request: { splitChunk: "test.foo_compound", keyPattern: { o.a: 1.0, o.b: 1.0 }, min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 4.5 }, from: "shard0001", splitKeys: [ { o.a: 2.0, o.b: 1.2 } ], shardId: "test.foo_compound-o.a_1.0o.b_1.2", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:26 [conn4] created new distributed lock for test.foo_compound on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794a7271f2fe6d09db37
m30001| Thu Jun 14 01:40:26 [conn4] splitChunk accepted at version 1|4||4fd9794a8a26dcf9048e3fc4
m30001| Thu Jun 14 01:40:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:26-44", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652426164), what: "split", ns: "test.foo_compound", details: { before: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 4.5 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd9794a8a26dcf9048e3fc4') }, right: { min: { o.a: 2.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 4.5 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd9794a8a26dcf9048e3fc4') } } }
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:26 [conn] CMD: movechunk: { movechunk: "test.foo_compound", find: { o.a: 1.0, o.b: 1.2 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:26 [conn] moving chunk ns: test.foo_compound moving ( ns:test.foo_compound at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { o.a: 1.0, o.b: 1.2 } max: { o.a: 2.0, o.b: 1.2 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:26 [conn4] received moveChunk request: { moveChunk: "test.foo_compound", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, maxChunkSizeBytes: 52428800, shardId: "test.foo_compound-o.a_1.0o.b_1.2", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:26 [conn4] created new distributed lock for test.foo_compound on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:26 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794a7271f2fe6d09db38
m30001| Thu Jun 14 01:40:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:26-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652426166), what: "moveChunk.start", ns: "test.foo_compound", details: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:26 [conn4] moveChunk request accepted at version 1|6||4fd9794a8a26dcf9048e3fc4
m30001| Thu Jun 14 01:40:26 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:26 [migrateThread] build index test.foo_compound { _id: 1 }
m30000| Thu Jun 14 01:40:26 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:26 [migrateThread] info: creating collection test.foo_compound on add index
m30000| Thu Jun 14 01:40:26 [migrateThread] build index test.foo_compound { o.a: 1.0, o.b: 1.0 }
m30000| Thu Jun 14 01:40:26 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_compound' { o.a: 1.0, o.b: 1.2 } -> { o.a: 2.0, o.b: 1.2 }
m30000| Thu Jun 14 01:40:26 [initandlisten] connection accepted from 127.0.0.1:56710 #11 (11 connections now open)
m30001| Thu Jun 14 01:40:27 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_compound", from: "localhost:30001", min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, shardKeyPattern: { o.a: 1, o.b: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 156, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:27 [conn4] moveChunk setting version to: 2|0||4fd9794a8a26dcf9048e3fc4
m30000| Thu Jun 14 01:40:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_compound' { o.a: 1.0, o.b: 1.2 } -> { o.a: 2.0, o.b: 1.2 }
m30000| Thu Jun 14 01:40:27 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:27-7", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652427177), what: "moveChunk.to", ns: "test.foo_compound", details: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30001| Thu Jun 14 01:40:27 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_compound", from: "localhost:30001", min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, shardKeyPattern: { o.a: 1, o.b: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 156, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:27 [conn4] moveChunk updating self version to: 2|1||4fd9794a8a26dcf9048e3fc4 through { o.a: MinKey, o.b: MinKey } -> { o.a: 1.0, o.b: 1.2 } for collection 'test.foo_compound'
m30001| Thu Jun 14 01:40:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:27-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652427181), what: "moveChunk.commit", ns: "test.foo_compound", details: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:27 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:27 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_compound/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:27-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652427182), what: "moveChunk.from", ns: "test.foo_compound", details: { min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:27 [conn4] command admin.$cmd command: { moveChunk: "test.foo_compound", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { o.a: 1.0, o.b: 1.2 }, max: { o.a: 2.0, o.b: 1.2 }, maxChunkSizeBytes: 52428800, shardId: "test.foo_compound-o.a_1.0o.b_1.2", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:4364 w:835829 reslen:37 1016ms
m30999| Thu Jun 14 01:40:27 [conn] ChunkManager: time to load chunks for test.foo_compound: 0ms sequenceNumber: 41 version: 2|1||4fd9794a8a26dcf9048e3fc4 based on: 1|6||4fd9794a8a26dcf9048e3fc4
ShardingTest test.foo_compound-o.a_MinKeyo.b_MinKey 2000|1 { "o.a" : { $minKey : 1 }, "o.b" : { $minKey : 1 } } -> { "o.a" : 1, "o.b" : 1.2 } shard0001 test.foo_compound
test.foo_compound-o.a_1.0o.b_1.2 2000|0 { "o.a" : 1, "o.b" : 1.2 } -> { "o.a" : 2, "o.b" : 1.2 } shard0000 test.foo_compound
test.foo_compound-o.a_2.0o.b_1.2 1000|6 { "o.a" : 2, "o.b" : 1.2 } -> { "o.a" : 2, "o.b" : 4.5 } shard0001 test.foo_compound
test.foo_compound-o.a_2.0o.b_4.5 1000|4 { "o.a" : 2, "o.b" : 4.5 } -> { "o.a" : { $maxKey : 1 }, "o.b" : { $maxKey : 1 } } shard0001 test.foo_compound
test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_embedded 1-a.b_MinKey 2000|1 { "a.b" : { $minKey : 1 } } -> { "a.b" : "allan" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"allan" 2000|0 { "a.b" : "allan" } -> { "a.b" : "joe" } shard0000 test.foo_embedded 1
test.foo_embedded 1-a.b_"joe" 1000|6 { "a.b" : "joe" } -> { "a.b" : "sara" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"sara" 1000|4 { "a.b" : "sara" } -> { "a.b" : { $maxKey : 1 } } shard0001 test.foo_embedded 1
test.foo_embedded 2-a.b.c_MinKey 2000|1 { "a.b.c" : { $minKey : 1 } } -> { "a.b.c" : "allan" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"allan" 2000|0 { "a.b.c" : "allan" } -> { "a.b.c" : "joe" } shard0000 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"joe" 1000|6 { "a.b.c" : "joe" } -> { "a.b.c" : "sara" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"sara" 1000|4 { "a.b.c" : "sara" } -> { "a.b.c" : { $maxKey : 1 } } shard0001 test.foo_embedded 2
test.foo_object-o_MinKey 2000|1 { "o" : { $minKey : 1 } } -> { "o" : { "a" : 1, "b" : 1.2 } } shard0001 test.foo_object
test.foo_object-o_{ a: 1.0, b: 1.2 } 2000|0 { "o" : { "a" : 1, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 1.2 } } shard0000 test.foo_object
test.foo_object-o_{ a: 2.0, b: 1.2 } 1000|6 { "o" : { "a" : 2, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 4.5 } } shard0001 test.foo_object
test.foo_object-o_{ a: 2.0, b: 4.5 } 1000|4 { "o" : { "a" : 2, "b" : 4.5 } } -> { "o" : { $maxKey : 1 } } shard0001 test.foo_object
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
test.foo_string_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : "allan" } shard0001 test.foo_string_id
test.foo_string_id-_id_"allan" 2000|0 { "_id" : "allan" } -> { "_id" : "joe" } shard0000 test.foo_string_id
test.foo_string_id-_id_"joe" 1000|6 { "_id" : "joe" } -> { "_id" : "sara" } shard0001 test.foo_string_id
test.foo_string_id-_id_"sara" 1000|4 { "_id" : "sara" } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_string_id
m30000| Thu Jun 14 01:40:27 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:27 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:27 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:27 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:27 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_compound",
    "count" : 6,
    "numExtents" : 2,
    "size" : 312,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "o.a_1_o.b_1" : 16352
    },
    "avgObjSize" : 52,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_compound",
            "count" : 3,
            "size" : 156,
            "avgObjSize" : 52,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "o.a_1_o.b_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_compound",
            "count" : 3,
            "size" : 156,
            "avgObjSize" : 52,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "o.a_1_o.b_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
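The foo_compound pass above ends with a third split at { o.a: 2, o.b: 1.2 } and a moveChunk of the { o.a: 1, o.b: 1.2 } chunk to shard0000, leaving the four chunks shown in the dump. As a hedged sketch (not the test's verbatim code), the same traffic can be produced from a mongos shell with the admin commands that appear in the log; only the namespace, key pattern, split points, and target host are taken from the log, everything else is assumed.

    // Sketch only: reproduce the shardcollection / split / movechunk sequence
    // logged above for the dotted compound key { "o.a": 1, "o.b": 1 }.
    // Assumes sharding was already enabled for the "test" database earlier in the run.
    var admin = db.getSiblingDB("admin");
    admin.runCommand({ shardcollection: "test.foo_compound", key: { "o.a": 1, "o.b": 1 } });
    admin.runCommand({ split: "test.foo_compound", middle: { "o.a": 1, "o.b": 1.2 } });
    admin.runCommand({ split: "test.foo_compound", middle: { "o.a": 2, "o.b": 4.5 } });
    admin.runCommand({ split: "test.foo_compound", middle: { "o.a": 2, "o.b": 1.2 } });
    admin.runCommand({ movechunk: "test.foo_compound",
                       find: { "o.a": 1, "o.b": 1.2 }, to: "localhost:30000" });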
#### Now Testing oid_id ####
m30999| Thu Jun 14 01:40:27 [conn] CMD: shardcollection: { shardcollection: "test.foo_oid_id", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:40:27 [conn] enable sharding on: test.foo_oid_id with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:40:27 [conn] going to create 1 chunk(s) for: test.foo_oid_id using new epoch 4fd9794b8a26dcf9048e3fc5
m30999| Thu Jun 14 01:40:27 [conn] ChunkManager: time to load chunks for test.foo_oid_id: 0ms sequenceNumber: 42 version: 1|0||4fd9794b8a26dcf9048e3fc5 based on: (empty)
m30999| Thu Jun 14 01:40:27 [conn] resetting shard version of test.foo_oid_id on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:27 [conn4] build index test.foo_oid_id { _id: 1 }
m30001| Thu Jun 14 01:40:27 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:27 [conn4] info: creating collection test.foo_oid_id on add index
m30001| Thu Jun 14 01:40:27 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:27 [conn] splitting: test.foo_oid_id shard: ns:test.foo_oid_id at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30001| Thu Jun 14 01:40:27 [conn4] request split points lookup for chunk test.foo_oid_id { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:27 [conn4] received splitChunk request: { splitChunk: "test.foo_oid_id", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd9793f6f3da92d7ddef003') } ], shardId: "test.foo_oid_id-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:27 [conn4] created new distributed lock for test.foo_oid_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794b7271f2fe6d09db39
m30001| Thu Jun 14 01:40:27 [conn4] splitChunk accepted at version 1|0||4fd9794b8a26dcf9048e3fc5
m30001| Thu Jun 14 01:40:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:27-48", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652427213), what: "split", ns: "test.foo_oid_id", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9794b8a26dcf9048e3fc5') }, right: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9794b8a26dcf9048e3fc5') } } }
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:27 [conn] ChunkManager: time to load chunks for test.foo_oid_id: 0ms sequenceNumber: 43 version: 1|2||4fd9794b8a26dcf9048e3fc5 based on: 1|0||4fd9794b8a26dcf9048e3fc5
m30999| Thu Jun 14 01:40:27 [conn] splitting: test.foo_oid_id shard: ns:test.foo_oid_id at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') } max: { _id: MaxKey }
m30001| Thu Jun 14 01:40:27 [conn4] request split points lookup for chunk test.foo_oid_id { : ObjectId('4fd9793f6f3da92d7ddef003') } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:27 [conn4] received splitChunk request: { splitChunk: "test.foo_oid_id", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd9793f6f3da92d7ddef008') } ], shardId: "test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef003')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:27 [conn4] created new distributed lock for test.foo_oid_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794b7271f2fe6d09db3a
m30001| Thu Jun 14 01:40:27 [conn4] splitChunk accepted at version 1|2||4fd9794b8a26dcf9048e3fc5
m30001| Thu Jun 14 01:40:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:27-49", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652427217), what: "split", ns: "test.foo_oid_id", details: { before: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef008') }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9794b8a26dcf9048e3fc5') }, right: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef008') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd9794b8a26dcf9048e3fc5') } } }
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:27 [conn] ChunkManager: time to load chunks for test.foo_oid_id: 0ms sequenceNumber: 44 version: 1|4||4fd9794b8a26dcf9048e3fc5 based on: 1|2||4fd9794b8a26dcf9048e3fc5
m30999| Thu Jun 14 01:40:27 [conn] splitting: test.foo_oid_id shard: ns:test.foo_oid_id at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') } max: { _id: ObjectId('4fd9793f6f3da92d7ddef008') }
m30999| Thu Jun 14 01:40:27 [conn] ChunkManager: time to load chunks for test.foo_oid_id: 0ms sequenceNumber: 45 version: 1|6||4fd9794b8a26dcf9048e3fc5 based on: 1|4||4fd9794b8a26dcf9048e3fc5
m30001| Thu Jun 14 01:40:27 [conn4] request split points lookup for chunk test.foo_oid_id { : ObjectId('4fd9793f6f3da92d7ddef003') } -->> { : ObjectId('4fd9793f6f3da92d7ddef008') }
m30001| Thu Jun 14 01:40:27 [conn4] received splitChunk request: { splitChunk: "test.foo_oid_id", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef008') }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd9793f6f3da92d7ddef006') } ], shardId: "test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef003')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:27 [conn4] created new distributed lock for test.foo_oid_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794b7271f2fe6d09db3b
m30001| Thu Jun 14 01:40:27 [conn4] splitChunk accepted at version 1|4||4fd9794b8a26dcf9048e3fc5
m30001| Thu Jun 14 01:40:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:27-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652427221), what: "split", ns: "test.foo_oid_id", details: { before: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef008') }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd9794b8a26dcf9048e3fc5') }, right: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef008') }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd9794b8a26dcf9048e3fc5') } } }
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:27 [conn] CMD: movechunk: { movechunk: "test.foo_oid_id", find: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:27 [conn] moving chunk ns: test.foo_oid_id moving ( ns:test.foo_oid_id at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') } max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:27 [conn4] received moveChunk request: { moveChunk: "test.foo_oid_id", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, maxChunkSizeBytes: 52428800, shardId: "test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef003')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:27 [conn4] created new distributed lock for test.foo_oid_id on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:27 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794b7271f2fe6d09db3c
m30001| Thu Jun 14 01:40:27 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:27-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652427223), what: "moveChunk.start", ns: "test.foo_oid_id", details: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:27 [conn4] moveChunk request accepted at version 1|6||4fd9794b8a26dcf9048e3fc5
m30001| Thu Jun 14 01:40:27 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:27 [migrateThread] build index test.foo_oid_id { _id: 1 }
m30000| Thu Jun 14 01:40:27 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:27 [migrateThread] info: creating collection test.foo_oid_id on add index
m30000| Thu Jun 14 01:40:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_oid_id' { _id: ObjectId('4fd9793f6f3da92d7ddef003') } -> { _id: ObjectId('4fd9793f6f3da92d7ddef006') }
m30001| Thu Jun 14 01:40:28 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_oid_id", from: "localhost:30001", min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 66, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:28 [conn4] moveChunk setting version to: 2|0||4fd9794b8a26dcf9048e3fc5
m30000| Thu Jun 14 01:40:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_oid_id' { _id: ObjectId('4fd9793f6f3da92d7ddef003') } -> { _id: ObjectId('4fd9793f6f3da92d7ddef006') }
m30000| Thu Jun 14 01:40:28 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:28-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652428233), what: "moveChunk.to", ns: "test.foo_oid_id", details: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1007 } }
m30001| Thu Jun 14 01:40:28 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_oid_id", from: "localhost:30001", min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 66, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:28 [conn4] moveChunk updating self version to: 2|1||4fd9794b8a26dcf9048e3fc5 through { _id: MinKey } -> { _id: ObjectId('4fd9793f6f3da92d7ddef003') } for collection 'test.foo_oid_id'
m30001| Thu Jun 14 01:40:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:28-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652428237), what: "moveChunk.commit", ns: "test.foo_oid_id", details: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:28 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:28 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_id/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:28-53", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652428238), what: "moveChunk.from", ns: "test.foo_oid_id", details: { min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, step1 of 6: 0, step2 of 6: 0, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:28 [conn4] command admin.$cmd command: { moveChunk: "test.foo_oid_id", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd9793f6f3da92d7ddef003') }, max: { _id: ObjectId('4fd9793f6f3da92d7ddef006') }, maxChunkSizeBytes: 52428800, shardId: "test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef003')", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:4906 w:836593 reslen:37 1015ms
m30999| Thu Jun 14 01:40:28 [conn] ChunkManager: time to load chunks for test.foo_oid_id: 0ms sequenceNumber: 46 version: 2|1||4fd9794b8a26dcf9048e3fc5 based on: 1|6||4fd9794b8a26dcf9048e3fc5
ShardingTest test.foo_compound-o.a_MinKeyo.b_MinKey 2000|1 { "o.a" : { $minKey : 1 }, "o.b" : { $minKey : 1 } } -> { "o.a" : 1, "o.b" : 1.2 } shard0001 test.foo_compound
test.foo_compound-o.a_1.0o.b_1.2 2000|0 { "o.a" : 1, "o.b" : 1.2 } -> { "o.a" : 2, "o.b" : 1.2 } shard0000 test.foo_compound
test.foo_compound-o.a_2.0o.b_1.2 1000|6 { "o.a" : 2, "o.b" : 1.2 } -> { "o.a" : 2, "o.b" : 4.5 } shard0001 test.foo_compound
test.foo_compound-o.a_2.0o.b_4.5 1000|4 { "o.a" : 2, "o.b" : 4.5 } -> { "o.a" : { $maxKey : 1 }, "o.b" : { $maxKey : 1 } } shard0001 test.foo_compound
test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_embedded 1-a.b_MinKey 2000|1 { "a.b" : { $minKey : 1 } } -> { "a.b" : "allan" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"allan" 2000|0 { "a.b" : "allan" } -> { "a.b" : "joe" } shard0000 test.foo_embedded 1
test.foo_embedded 1-a.b_"joe" 1000|6 { "a.b" : "joe" } -> { "a.b" : "sara" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"sara" 1000|4 { "a.b" : "sara" } -> { "a.b" : { $maxKey : 1 } } shard0001 test.foo_embedded 1
test.foo_embedded 2-a.b.c_MinKey 2000|1 { "a.b.c" : { $minKey : 1 } } -> { "a.b.c" : "allan" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"allan" 2000|0 { "a.b.c" : "allan" } -> { "a.b.c" : "joe" } shard0000 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"joe" 1000|6 { "a.b.c" : "joe" } -> { "a.b.c" : "sara" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"sara" 1000|4 { "a.b.c" : "sara" } -> { "a.b.c" : { $maxKey : 1 } } shard0001 test.foo_embedded 2
test.foo_object-o_MinKey 2000|1 { "o" : { $minKey : 1 } } -> { "o" : { "a" : 1, "b" : 1.2 } } shard0001 test.foo_object
test.foo_object-o_{ a: 1.0, b: 1.2 } 2000|0 { "o" : { "a" : 1, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 1.2 } } shard0000 test.foo_object
test.foo_object-o_{ a: 2.0, b: 1.2 } 1000|6 { "o" : { "a" : 2, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 4.5 } } shard0001 test.foo_object
test.foo_object-o_{ a: 2.0, b: 4.5 } 1000|4 { "o" : { "a" : 2, "b" : 4.5 } } -> { "o" : { $maxKey : 1 } } shard0001 test.foo_object
test.foo_oid_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : ObjectId("4fd9793f6f3da92d7ddef003") } shard0001 test.foo_oid_id
test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef003') 2000|0 { "_id" : ObjectId("4fd9793f6f3da92d7ddef003") } -> { "_id" : ObjectId("4fd9793f6f3da92d7ddef006") } shard0000 test.foo_oid_id
test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef006') 1000|6 { "_id" : ObjectId("4fd9793f6f3da92d7ddef006") } -> { "_id" : ObjectId("4fd9793f6f3da92d7ddef008") } shard0001 test.foo_oid_id
test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef008') 1000|4 { "_id" : ObjectId("4fd9793f6f3da92d7ddef008") } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_oid_id
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
test.foo_string_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : "allan" } shard0001 test.foo_string_id
test.foo_string_id-_id_"allan" 2000|0 { "_id" : "allan" } -> { "_id" : "joe" } shard0000 test.foo_string_id
test.foo_string_id-_id_"joe" 1000|6 { "_id" : "joe" } -> { "_id" : "sara" } shard0001 test.foo_string_id
test.foo_string_id-_id_"sara" 1000|4 { "_id" : "sara" } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_string_id
m30000| Thu Jun 14 01:40:28 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:28 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:28 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:28 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:28 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_oid_id",
    "count" : 6,
    "numExtents" : 2,
    "size" : 144,
    "storageSize" : 16384,
    "totalIndexSize" : 16352,
    "indexSizes" : {
        "_id_" : 16352
    },
    "avgObjSize" : 24,
    "nindexes" : 1,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_oid_id",
            "count" : 3,
            "size" : 72,
            "avgObjSize" : 24,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 1,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 8176,
            "indexSizes" : {
                "_id_" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_oid_id",
            "count" : 3,
            "size" : 72,
            "avgObjSize" : 24,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 1,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 8176,
            "indexSizes" : {
                "_id_" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
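In the sharded collStats output above, mongos aggregates the per-shard results: count 6 is the sum of the two per-shard counts of 3, and totalIndexSize 16352 is the sum of the two 8176 shard index sizes (the systemFlags/userFlags warnings indicate that this 2.1.2-pre mongos does not yet fold those newer per-shard fields into the aggregate). A minimal, hedged sketch of checking that invariant from a mongos shell:

    // Sketch only: cross-check the mongos-level collStats aggregation seen above.
    var stats = db.getSiblingDB("test").foo_oid_id.stats();
    assert(stats.sharded, "expected a sharded collection");
    var perShardCount = 0;
    var perShardIndexSize = 0;
    for (var shardName in stats.shards) {
        perShardCount += stats.shards[shardName].count;
        perShardIndexSize += stats.shards[shardName].totalIndexSize;
    }
    assert.eq(stats.count, perShardCount);              // 6 == 3 + 3 in this run
    assert.eq(stats.totalIndexSize, perShardIndexSize); // 16352 == 8176 + 8176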
#### Now Testing oid_other ####
m30999| Thu Jun 14 01:40:28 [conn] CMD: shardcollection: { shardcollection: "test.foo_oid_other", key: { o: 1.0 } }
m30999| Thu Jun 14 01:40:28 [conn] enable sharding on: test.foo_oid_other with shard key: { o: 1.0 }
m30999| Thu Jun 14 01:40:28 [conn] going to create 1 chunk(s) for: test.foo_oid_other using new epoch 4fd9794c8a26dcf9048e3fc6
m30999| Thu Jun 14 01:40:28 [conn] ChunkManager: time to load chunks for test.foo_oid_other: 0ms sequenceNumber: 47 version: 1|0||4fd9794c8a26dcf9048e3fc6 based on: (empty)
m30999| Thu Jun 14 01:40:28 [conn] resetting shard version of test.foo_oid_other on localhost:30000, version is zero
m30001| Thu Jun 14 01:40:28 [conn4] build index test.foo_oid_other { _id: 1 }
m30001| Thu Jun 14 01:40:28 [conn4] build index done. scanned 0 total records. 0.012 secs
m30001| Thu Jun 14 01:40:28 [conn4] info: creating collection test.foo_oid_other on add index
m30001| Thu Jun 14 01:40:28 [conn4] build index test.foo_oid_other { o: 1.0 }
m30001| Thu Jun 14 01:40:28 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:28 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:28 [conn] splitting: test.foo_oid_other shard: ns:test.foo_oid_other at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { o: MinKey } max: { o: MaxKey }
m30001| Thu Jun 14 01:40:28 [conn4] request split points lookup for chunk test.foo_oid_other { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:28 [conn4] received splitChunk request: { splitChunk: "test.foo_oid_other", keyPattern: { o: 1.0 }, min: { o: MinKey }, max: { o: MaxKey }, from: "shard0001", splitKeys: [ { o: ObjectId('4fd9793f6f3da92d7ddef009') } ], shardId: "test.foo_oid_other-o_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:28 [conn4] created new distributed lock for test.foo_oid_other on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794c7271f2fe6d09db3d
m30001| Thu Jun 14 01:40:28 [conn4] splitChunk accepted at version 1|0||4fd9794c8a26dcf9048e3fc6
m30001| Thu Jun 14 01:40:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:28-54", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652428285), what: "split", ns: "test.foo_oid_other", details: { before: { min: { o: MinKey }, max: { o: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o: MinKey }, max: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9794c8a26dcf9048e3fc6') }, right: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9794c8a26dcf9048e3fc6') } } }
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:28 [conn] ChunkManager: time to load chunks for test.foo_oid_other: 0ms sequenceNumber: 48 version: 1|2||4fd9794c8a26dcf9048e3fc6 based on: 1|0||4fd9794c8a26dcf9048e3fc6
m30999| Thu Jun 14 01:40:28 [conn] splitting: test.foo_oid_other shard: ns:test.foo_oid_other at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { o: ObjectId('4fd9793f6f3da92d7ddef009') } max: { o: MaxKey }
m30001| Thu Jun 14 01:40:28 [conn4] request split points lookup for chunk test.foo_oid_other { : ObjectId('4fd9793f6f3da92d7ddef009') } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:28 [conn4] received splitChunk request: { splitChunk: "test.foo_oid_other", keyPattern: { o: 1.0 }, min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: MaxKey }, from: "shard0001", splitKeys: [ { o: ObjectId('4fd9793f6f3da92d7ddef00e') } ], shardId: "test.foo_oid_other-o_ObjectId('4fd9793f6f3da92d7ddef009')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:28 [conn4] created new distributed lock for test.foo_oid_other on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794c7271f2fe6d09db3e
m30001| Thu Jun 14 01:40:28 [conn4] splitChunk accepted at version 1|2||4fd9794c8a26dcf9048e3fc6
m30001| Thu Jun 14 01:40:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:28-55", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652428288), what: "split", ns: "test.foo_oid_other", details: { before: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00e') }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd9794c8a26dcf9048e3fc6') }, right: { min: { o: ObjectId('4fd9793f6f3da92d7ddef00e') }, max: { o: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd9794c8a26dcf9048e3fc6') } } }
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:28 [conn] ChunkManager: time to load chunks for test.foo_oid_other: 0ms sequenceNumber: 49 version: 1|4||4fd9794c8a26dcf9048e3fc6 based on: 1|2||4fd9794c8a26dcf9048e3fc6
m30999| Thu Jun 14 01:40:28 [conn] splitting: test.foo_oid_other shard: ns:test.foo_oid_other at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { o: ObjectId('4fd9793f6f3da92d7ddef009') } max: { o: ObjectId('4fd9793f6f3da92d7ddef00e') }
m30999| Thu Jun 14 01:40:28 [conn] ChunkManager: time to load chunks for test.foo_oid_other: 0ms sequenceNumber: 50 version: 1|6||4fd9794c8a26dcf9048e3fc6 based on: 1|4||4fd9794c8a26dcf9048e3fc6
m30001| Thu Jun 14 01:40:28 [conn4] request split points lookup for chunk test.foo_oid_other { : ObjectId('4fd9793f6f3da92d7ddef009') } -->> { : ObjectId('4fd9793f6f3da92d7ddef00e') }
m30001| Thu Jun 14 01:40:28 [conn4] received splitChunk request: { splitChunk: "test.foo_oid_other", keyPattern: { o: 1.0 }, min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00e') }, from: "shard0001", splitKeys: [ { o: ObjectId('4fd9793f6f3da92d7ddef00c') } ], shardId: "test.foo_oid_other-o_ObjectId('4fd9793f6f3da92d7ddef009')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:28 [conn4] created new distributed lock for test.foo_oid_other on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794c7271f2fe6d09db3f
m30001| Thu Jun 14 01:40:28 [conn4] splitChunk accepted at version 1|4||4fd9794c8a26dcf9048e3fc6
m30001| Thu Jun 14 01:40:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:28-56", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652428292), what: "split", ns: "test.foo_oid_other", details: { before: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00e') }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd9794c8a26dcf9048e3fc6') }, right: { min: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00e') }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd9794c8a26dcf9048e3fc6') } } }
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30999| Thu Jun 14 01:40:28 [conn] CMD: movechunk: { movechunk: "test.foo_oid_other", find: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:28 [conn] moving chunk ns: test.foo_oid_other moving ( ns:test.foo_oid_other at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { o: ObjectId('4fd9793f6f3da92d7ddef009') } max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:28 [conn4] received moveChunk request: { moveChunk: "test.foo_oid_other", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, maxChunkSizeBytes: 52428800, shardId: "test.foo_oid_other-o_ObjectId('4fd9793f6f3da92d7ddef009')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:28 [conn4] created new distributed lock for test.foo_oid_other on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:28 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' acquired, ts : 4fd9794c7271f2fe6d09db40
m30001| Thu Jun 14 01:40:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:28-57", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652428294), what: "moveChunk.start", ns: "test.foo_oid_other", details: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:28 [conn4] moveChunk request accepted at version 1|6||4fd9794c8a26dcf9048e3fc6
m30001| Thu Jun 14 01:40:28 [conn4] moveChunk number of documents: 3
m30000| Thu Jun 14 01:40:28 [migrateThread] build index test.foo_oid_other { _id: 1 }
m30000| Thu Jun 14 01:40:28 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:28 [migrateThread] info: creating collection test.foo_oid_other on add index
m30000| Thu Jun 14 01:40:28 [migrateThread] build index test.foo_oid_other { o: 1.0 }
m30000| Thu Jun 14 01:40:28 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_oid_other' { o: ObjectId('4fd9793f6f3da92d7ddef009') } -> { o: ObjectId('4fd9793f6f3da92d7ddef00c') }
m30001| Thu Jun 14 01:40:29 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo_oid_other", from: "localhost:30001", min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, shardKeyPattern: { o: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 111, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:29 [conn4] moveChunk setting version to: 2|0||4fd9794c8a26dcf9048e3fc6
m30000| Thu Jun 14 01:40:29 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo_oid_other' { o: ObjectId('4fd9793f6f3da92d7ddef009') } -> { o: ObjectId('4fd9793f6f3da92d7ddef00c') }
m30000| Thu Jun 14 01:40:29 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:29-9", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652429305), what: "moveChunk.to", ns: "test.foo_oid_other", details: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1008 } }
m30001| Thu Jun 14 01:40:29 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo_oid_other", from: "localhost:30001", min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, shardKeyPattern: { o: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 111, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:29 [conn4] moveChunk updating self version to: 2|1||4fd9794c8a26dcf9048e3fc6 through { o: MinKey } -> { o: ObjectId('4fd9793f6f3da92d7ddef009') } for collection 'test.foo_oid_other'
m30001| Thu Jun 14 01:40:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:29-58", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652429309), what: "moveChunk.commit", ns: "test.foo_oid_other", details: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:29 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:29 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:29 [conn4] distributed lock 'test.foo_oid_other/domU-12-31-39-01-70-B4:30001:1339652417:126826176' unlocked.
m30001| Thu Jun 14 01:40:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:29-59", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44608", time: new Date(1339652429310), what: "moveChunk.from", ns: "test.foo_oid_other", details: { min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:29 [conn4] command admin.$cmd command: { moveChunk: "test.foo_oid_other", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { o: ObjectId('4fd9793f6f3da92d7ddef009') }, max: { o: ObjectId('4fd9793f6f3da92d7ddef00c') }, maxChunkSizeBytes: 52428800, shardId: "test.foo_oid_other-o_ObjectId('4fd9793f6f3da92d7ddef009')", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:68 r:5451 w:850053 reslen:37 1016ms
m30999| Thu Jun 14 01:40:29 [conn] ChunkManager: time to load chunks for test.foo_oid_other: 0ms sequenceNumber: 51 version: 2|1||4fd9794c8a26dcf9048e3fc6 based on: 1|6||4fd9794c8a26dcf9048e3fc6
ShardingTest test.foo_compound-o.a_MinKeyo.b_MinKey 2000|1 { "o.a" : { $minKey : 1 }, "o.b" : { $minKey : 1 } } -> { "o.a" : 1, "o.b" : 1.2 } shard0001 test.foo_compound
test.foo_compound-o.a_1.0o.b_1.2 2000|0 { "o.a" : 1, "o.b" : 1.2 } -> { "o.a" : 2, "o.b" : 1.2 } shard0000 test.foo_compound
test.foo_compound-o.a_2.0o.b_1.2 1000|6 { "o.a" : 2, "o.b" : 1.2 } -> { "o.a" : 2, "o.b" : 4.5 } shard0001 test.foo_compound
test.foo_compound-o.a_2.0o.b_4.5 1000|4 { "o.a" : 2, "o.b" : 4.5 } -> { "o.a" : { $maxKey : 1 }, "o.b" : { $maxKey : 1 } } shard0001 test.foo_compound
test.foo_date-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : ISODate("1970-01-01T00:16:40Z") } shard0001 test.foo_date
test.foo_date-a_new Date(1000000) 2000|0 { "a" : ISODate("1970-01-01T00:16:40Z") } -> { "a" : ISODate("1970-01-01T01:06:40Z") } shard0000 test.foo_date
test.foo_date-a_new Date(4000000) 1000|6 { "a" : ISODate("1970-01-01T01:06:40Z") } -> { "a" : ISODate("1970-01-01T01:40:00Z") } shard0001 test.foo_date
test.foo_date-a_new Date(6000000) 1000|4 { "a" : ISODate("1970-01-01T01:40:00Z") } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_date
test.foo_double-a_MinKey 2000|1 { "a" : { $minKey : 1 } } -> { "a" : 1.2 } shard0001 test.foo_double
test.foo_double-a_1.2 2000|0 { "a" : 1.2 } -> { "a" : 4.6 } shard0000 test.foo_double
test.foo_double-a_4.6 1000|6 { "a" : 4.6 } -> { "a" : 9.9 } shard0001 test.foo_double
test.foo_double-a_9.9 1000|4 { "a" : 9.9 } -> { "a" : { $maxKey : 1 } } shard0001 test.foo_double
test.foo_embedded 1-a.b_MinKey 2000|1 { "a.b" : { $minKey : 1 } } -> { "a.b" : "allan" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"allan" 2000|0 { "a.b" : "allan" } -> { "a.b" : "joe" } shard0000 test.foo_embedded 1
test.foo_embedded 1-a.b_"joe" 1000|6 { "a.b" : "joe" } -> { "a.b" : "sara" } shard0001 test.foo_embedded 1
test.foo_embedded 1-a.b_"sara" 1000|4 { "a.b" : "sara" } -> { "a.b" : { $maxKey : 1 } } shard0001 test.foo_embedded 1
test.foo_embedded 2-a.b.c_MinKey 2000|1 { "a.b.c" : { $minKey : 1 } } -> { "a.b.c" : "allan" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"allan" 2000|0 { "a.b.c" : "allan" } -> { "a.b.c" : "joe" } shard0000 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"joe" 1000|6 { "a.b.c" : "joe" } -> { "a.b.c" : "sara" } shard0001 test.foo_embedded 2
test.foo_embedded 2-a.b.c_"sara" 1000|4 { "a.b.c" : "sara" } -> { "a.b.c" : { $maxKey : 1 } } shard0001 test.foo_embedded 2
test.foo_object-o_MinKey 2000|1 { "o" : { $minKey : 1 } } -> { "o" : { "a" : 1, "b" : 1.2 } } shard0001 test.foo_object
test.foo_object-o_{ a: 1.0, b: 1.2 } 2000|0 { "o" : { "a" : 1, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 1.2 } } shard0000 test.foo_object
test.foo_object-o_{ a: 2.0, b: 1.2 } 1000|6 { "o" : { "a" : 2, "b" : 1.2 } } -> { "o" : { "a" : 2, "b" : 4.5 } } shard0001 test.foo_object
test.foo_object-o_{ a: 2.0, b: 4.5 } 1000|4 { "o" : { "a" : 2, "b" : 4.5 } } -> { "o" : { $maxKey : 1 } } shard0001 test.foo_object
test.foo_oid_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : ObjectId("4fd9793f6f3da92d7ddef003") } shard0001 test.foo_oid_id
test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef003') 2000|0 { "_id" : ObjectId("4fd9793f6f3da92d7ddef003") } -> { "_id" : ObjectId("4fd9793f6f3da92d7ddef006") } shard0000 test.foo_oid_id
test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef006') 1000|6 { "_id" : ObjectId("4fd9793f6f3da92d7ddef006") } -> { "_id" : ObjectId("4fd9793f6f3da92d7ddef008") } shard0001 test.foo_oid_id
test.foo_oid_id-_id_ObjectId('4fd9793f6f3da92d7ddef008') 1000|4 { "_id" : ObjectId("4fd9793f6f3da92d7ddef008") } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_oid_id
test.foo_oid_other-o_MinKey 2000|1 { "o" : { $minKey : 1 } } -> { "o" : ObjectId("4fd9793f6f3da92d7ddef009") } shard0001 test.foo_oid_other
test.foo_oid_other-o_ObjectId('4fd9793f6f3da92d7ddef009') 2000|0 { "o" : ObjectId("4fd9793f6f3da92d7ddef009") } -> { "o" : ObjectId("4fd9793f6f3da92d7ddef00c") } shard0000 test.foo_oid_other
test.foo_oid_other-o_ObjectId('4fd9793f6f3da92d7ddef00c') 1000|6 { "o" : ObjectId("4fd9793f6f3da92d7ddef00c") } -> { "o" : ObjectId("4fd9793f6f3da92d7ddef00e") } shard0001 test.foo_oid_other
test.foo_oid_other-o_ObjectId('4fd9793f6f3da92d7ddef00e') 1000|4 { "o" : ObjectId("4fd9793f6f3da92d7ddef00e") } -> { "o" : { $maxKey : 1 } } shard0001 test.foo_oid_other
test.foo_string-k_MinKey 2000|1 { "k" : { $minKey : 1 } } -> { "k" : "allan" } shard0001 test.foo_string
test.foo_string-k_"allan" 2000|0 { "k" : "allan" } -> { "k" : "joe" } shard0000 test.foo_string
test.foo_string-k_"joe" 1000|6 { "k" : "joe" } -> { "k" : "sara" } shard0001 test.foo_string
test.foo_string-k_"sara" 1000|4 { "k" : "sara" } -> { "k" : { $maxKey : 1 } } shard0001 test.foo_string
test.foo_string_id-_id_MinKey 2000|1 { "_id" : { $minKey : 1 } } -> { "_id" : "allan" } shard0001 test.foo_string_id
test.foo_string_id-_id_"allan" 2000|0 { "_id" : "allan" } -> { "_id" : "joe" } shard0000 test.foo_string_id
test.foo_string_id-_id_"joe" 1000|6 { "_id" : "joe" } -> { "_id" : "sara" } shard0001 test.foo_string_id
test.foo_string_id-_id_"sara" 1000|4 { "_id" : "sara" } -> { "_id" : { $maxKey : 1 } } shard0001 test.foo_string_id
m30000| Thu Jun 14 01:40:29 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:29 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:29 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:40:29 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:40:29 [conn] warning: mongos collstats doesn't know about: userFlags
{
    "sharded" : true,
    "ns" : "test.foo_oid_other",
    "count" : 6,
    "numExtents" : 2,
    "size" : 240,
    "storageSize" : 16384,
    "totalIndexSize" : 32704,
    "indexSizes" : {
        "_id_" : 16352,
        "o_1" : 16352
    },
    "avgObjSize" : 40,
    "nindexes" : 2,
    "nchunks" : 4,
    "shards" : {
        "shard0000" : {
            "ns" : "test.foo_oid_other",
            "count" : 3,
            "size" : 120,
            "avgObjSize" : 40,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "o_1" : 8176
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.foo_oid_other",
            "count" : 3,
            "size" : 120,
            "avgObjSize" : 40,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 2,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "o_1" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}
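The chunk dumps printed between passes come from the cluster metadata, so the same picture can be pulled directly from the config database through mongos. A hedged sketch, using the config.chunks field names as they appear in this 2.1.x log:

    // Sketch only: list test.foo_oid_other's chunks straight from config.chunks.
    var configDB = db.getSiblingDB("config");
    configDB.chunks.find({ ns: "test.foo_oid_other" }).sort({ min: 1 }).forEach(function (c) {
        print(c.shard + "\t" + tojson(c.min) + " -> " + tojson(c.max));
    });
    // Per the log above, the chunk starting at ObjectId('4fd9793f6f3da92d7ddef009')
    // should now report shard0000 after the moveChunk commit.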
m30999| Thu Jun 14 01:40:29 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:40:29 [conn4] end connection 127.0.0.1:56698 (10 connections now open)
m30000| Thu Jun 14 01:40:29 [conn3] end connection 127.0.0.1:56694 (10 connections now open)
m30000| Thu Jun 14 01:40:29 [conn6] end connection 127.0.0.1:56700 (8 connections now open)
m30000| Thu Jun 14 01:40:29 [conn7] end connection 127.0.0.1:56703 (7 connections now open)
m30001| Thu Jun 14 01:40:29 [conn3] end connection 127.0.0.1:44607 (4 connections now open)
m30000| Thu Jun 14 01:40:29 [conn11] end connection 127.0.0.1:56710 (6 connections now open)
m30001| Thu Jun 14 01:40:29 [conn4] end connection 127.0.0.1:44608 (4 connections now open)
Thu Jun 14 01:40:30 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:40:30 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:40:30 [interruptThread] now exiting
m30000| Thu Jun 14 01:40:30 dbexit:
m30000| Thu Jun 14 01:40:30 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:40:30 [interruptThread] closing listening socket: 35
m30000| Thu Jun 14 01:40:30 [interruptThread] closing listening socket: 36
m30000| Thu Jun 14 01:40:30 [interruptThread] closing listening socket: 37
m30000| Thu Jun 14 01:40:30 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:40:30 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:40:30 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:40:30 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:40:30 [conn5] end connection 127.0.0.1:44611 (2 connections now open)
m30000| Thu Jun 14 01:40:30 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:40:30 [conn10] end connection 127.0.0.1:56709 (5 connections now open)
m30000| Thu Jun 14 01:40:30 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:40:30 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:40:30 dbexit: really exiting now
Thu Jun 14 01:40:31 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:40:31 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:40:31 [interruptThread] now exiting
m30001| Thu Jun 14 01:40:31 dbexit:
m30001| Thu Jun 14 01:40:31 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:40:31 [interruptThread] closing listening socket: 38
m30001| Thu Jun 14 01:40:31 [interruptThread] closing listening socket: 39
m30001| Thu Jun 14 01:40:31 [interruptThread] closing listening socket: 40
m30001| Thu Jun 14 01:40:31 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:40:31 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:40:31 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:40:31 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:40:31 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:40:31 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:40:31 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:40:31 dbexit: really exiting now
Thu Jun 14 01:40:32 shell: stopped mongo program on port 30001
*** ShardingTest key_many completed successfully in 17.197 seconds ***
17263.981104ms
Thu Jun 14 01:40:32 [initandlisten] connection accepted from 127.0.0.1:35179 #40 (27 connections now open)
*******************************************
Test : key_string.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/key_string.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/key_string.js";TestData.testFile = "key_string.js";TestData.testName = "key_string";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:40:32 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/keystring0'
Thu Jun 14 01:40:32 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/keystring0
m30000| Thu Jun 14 01:40:32
m30000| Thu Jun 14 01:40:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:40:32
m30000| Thu Jun 14 01:40:32 [initandlisten] MongoDB starting : pid=26401 port=30000 dbpath=/data/db/keystring0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:40:32 [initandlisten]
m30000| Thu Jun 14 01:40:32 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:40:32 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:40:32 [initandlisten]
m30000| Thu Jun 14 01:40:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:40:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:40:32 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:40:32 [initandlisten]
m30000| Thu Jun 14 01:40:32 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:40:32 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:40:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:40:32 [initandlisten] options: { dbpath: "/data/db/keystring0", port: 30000 }
m30000| Thu Jun 14 01:40:32 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:40:32 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/keystring1'
m30000| Thu Jun 14 01:40:32 [initandlisten] connection accepted from 127.0.0.1:56713 #1 (1 connection now open)
Thu Jun 14 01:40:32 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/keystring1
m30001| Thu Jun 14 01:40:32
m30001| Thu Jun 14 01:40:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:40:32
m30001| Thu Jun 14 01:40:32 [initandlisten] MongoDB starting : pid=26414 port=30001 dbpath=/data/db/keystring1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:40:32 [initandlisten]
m30001| Thu Jun 14 01:40:32 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:40:32 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:40:32 [initandlisten]
m30001| Thu Jun 14 01:40:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:40:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:40:32 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:40:32 [initandlisten]
m30001| Thu Jun 14 01:40:32 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:40:32 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:40:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:40:32 [initandlisten] options: { dbpath: "/data/db/keystring1", port: 30001 }
m30001| Thu Jun 14 01:40:32 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:40:32 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:40:32 [initandlisten] connection accepted from 127.0.0.1:44618 #1 (1 connection now open)
m30000| Thu Jun 14 01:40:32 [initandlisten] connection accepted from 127.0.0.1:56716 #2 (2 connections now open)
ShardingTest keystring :
{
	"config" : "localhost:30000",
	"shards" : [
		connection to localhost:30000,
		connection to localhost:30001
	]
}
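The lines above show the keystring ShardingTest bringing up two shard mongods (ports 30000 and 30001) and, just below, a mongos on port 30999 against a single config server on 30000. For orientation only, a jstest that produces this kind of startup log is usually driven from the shell along these lines; this is a minimal sketch, not the actual test source, and the constructor arguments are an assumption about this era's harness:

    // Minimal sketch of the assumed test driver (not the actual keystring test source).
    var s  = new ShardingTest("keystring", 2);   // positional form: test name, number of shards
    var db = s.getDB("test");
    s.adminCommand({ enablesharding: "test" });                          // "enabling sharding on: test" below
    s.adminCommand({ shardcollection: "test.foo", key: { name: 1 } });   // shard key { name: 1.0 } as logged below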
m30000| Thu Jun 14 01:40:32 [FileAllocator] allocating new datafile /data/db/keystring0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:40:32 [FileAllocator] creating directory /data/db/keystring0/_tmp
Thu Jun 14 01:40:32 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30999| Thu Jun 14 01:40:32 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:40:32 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26429 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:40:32 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:40:32 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:40:32 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:40:32 [initandlisten] connection accepted from 127.0.0.1:56718 #3 (3 connections now open)
m30000| Thu Jun 14 01:40:33 [FileAllocator] done allocating datafile /data/db/keystring0/config.ns, size: 16MB, took 0.304 secs
m30000| Thu Jun 14 01:40:33 [FileAllocator] allocating new datafile /data/db/keystring0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:40:33 [FileAllocator] done allocating datafile /data/db/keystring0/config.0, size: 16MB, took 0.368 secs
m30000| Thu Jun 14 01:40:33 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn2] insert config.settings keyUpdates:0 locks(micros) w:694235 694ms
m30000| Thu Jun 14 01:40:33 [FileAllocator] allocating new datafile /data/db/keystring0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:40:33 [initandlisten] connection accepted from 127.0.0.1:56722 #4 (4 connections now open)
m30000| Thu Jun 14 01:40:33 [initandlisten] connection accepted from 127.0.0.1:56723 #5 (5 connections now open)
m30000| Thu Jun 14 01:40:33 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn4] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn4] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:40:33 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn4] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn4] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:40:33 [conn4] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:33 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:40:33 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:40:33 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:40:33 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:40:33 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:40:33
m30999| Thu Jun 14 01:40:33 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:40:33 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [initandlisten] connection accepted from 127.0.0.1:56724 #6 (6 connections now open)
m30000| Thu Jun 14 01:40:33 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:33 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652433:1804289383' acquired, ts : 4fd97951dc0bd7a287570b97
m30999| Thu Jun 14 01:40:33 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652433:1804289383' unlocked.
m30999| Thu Jun 14 01:40:33 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652433:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:40:33 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:33 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:40:33 [mongosMain] connection accepted from 127.0.0.1:53606 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:40:33 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:40:33 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:40:33 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:33 [conn] put [admin] on: config:localhost:30000
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30000| Thu Jun 14 01:40:34 [conn5] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:5 r:326 w:1459 reslen:177 532ms
m30000| Thu Jun 14 01:40:34 [FileAllocator] done allocating datafile /data/db/keystring0/config.1, size: 32MB, took 0.662 secs
m30999| Thu Jun 14 01:40:34 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30001| Thu Jun 14 01:40:34 [initandlisten] connection accepted from 127.0.0.1:44629 #2 (2 connections now open)
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:40:34 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
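Both shards are registered through the mongos, and the { "shardAdded" : ..., "ok" : 1 } documents above are addshard-style replies. A hedged sketch of the same step issued by hand, assuming "s" is the ShardingTest handle from the sketch earlier:

    // addshard runs against the mongos admin database; the replies match the documents logged above.
    printjson(s.adminCommand({ addshard: "localhost:30000" }));   // { "shardAdded" : "shard0000", "ok" : 1 }
    printjson(s.adminCommand({ addshard: "localhost:30001" }));   // { "shardAdded" : "shard0001", "ok" : 1 }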
m30999| Thu Jun 14 01:40:34 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:40:34 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:40:34 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:40:34 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { name: 1.0 } }
m30999| Thu Jun 14 01:40:34 [conn] enable sharding on: test.foo with shard key: { name: 1.0 }
m30001| Thu Jun 14 01:40:34 [FileAllocator] allocating new datafile /data/db/keystring1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:40:34 [FileAllocator] creating directory /data/db/keystring1/_tmp
m30999| Thu Jun 14 01:40:34 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd97952dc0bd7a287570b98
m30999| Thu Jun 14 01:40:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd97952dc0bd7a287570b98 based on: (empty)
m30999| Thu Jun 14 01:40:34 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97951dc0bd7a287570b96
m30999| Thu Jun 14 01:40:34 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:40:34 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97951dc0bd7a287570b96
m30000| Thu Jun 14 01:40:34 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:40:34 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:34 [initandlisten] connection accepted from 127.0.0.1:56727 #7 (7 connections now open)
m30001| Thu Jun 14 01:40:34 [initandlisten] connection accepted from 127.0.0.1:44631 #3 (3 connections now open)
m30001| Thu Jun 14 01:40:34 [FileAllocator] done allocating datafile /data/db/keystring1/test.ns, size: 16MB, took 0.4 secs
m30001| Thu Jun 14 01:40:34 [FileAllocator] allocating new datafile /data/db/keystring1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:40:34 [FileAllocator] done allocating datafile /data/db/keystring1/test.0, size: 16MB, took 0.281 secs
m30001| Thu Jun 14 01:40:34 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:40:34 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:34 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:40:34 [conn2] build index test.foo { name: 1.0 }
m30001| Thu Jun 14 01:40:34 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:34 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97951dc0bd7a287570b96'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:74 reslen:51 690ms
m30000| Thu Jun 14 01:40:34 [initandlisten] connection accepted from 127.0.0.1:56729 #8 (8 connections now open)
m30001| Thu Jun 14 01:40:34 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:10 W:64 r:295 w:693379 693ms
m30001| Thu Jun 14 01:40:34 [FileAllocator] allocating new datafile /data/db/keystring1/test.1, filling with zeroes...
m30001| Thu Jun 14 01:40:34 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:40:34 [initandlisten] connection accepted from 127.0.0.1:44633 #4 (4 connections now open)
m30999| Thu Jun 14 01:40:34 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { name: MinKey } max: { name: MaxKey }
m30001| Thu Jun 14 01:40:34 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30000| Thu Jun 14 01:40:34 [initandlisten] connection accepted from 127.0.0.1:56731 #9 (9 connections now open)
m30001| Thu Jun 14 01:40:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { name: 1.0 }, min: { name: MinKey }, max: { name: MaxKey }, from: "shard0001", splitKeys: [ { name: "allan" } ], shardId: "test.foo-name_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:34 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652434:660813478 (sleeping for 30000ms)
m30001| Thu Jun 14 01:40:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' acquired, ts : 4fd979524174cb05840fb7e8
m30001| Thu Jun 14 01:40:34 [conn4] splitChunk accepted at version 1|0||4fd97952dc0bd7a287570b98
m30001| Thu Jun 14 01:40:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:34-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44633", time: new Date(1339652434938), what: "split", ns: "test.foo", details: { before: { min: { name: MinKey }, max: { name: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { name: MinKey }, max: { name: "allan" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97952dc0bd7a287570b98') }, right: { min: { name: "allan" }, max: { name: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97952dc0bd7a287570b98') } } }
m30000| Thu Jun 14 01:40:34 [FileAllocator] allocating new datafile /data/db/keystring0/test.ns, filling with zeroes...
m30999| Thu Jun 14 01:40:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd97952dc0bd7a287570b98 based on: 1|0||4fd97952dc0bd7a287570b98
m30999| Thu Jun 14 01:40:34 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { name: "allan" } max: { name: MaxKey }
m30999| Thu Jun 14 01:40:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd97952dc0bd7a287570b98 based on: 1|2||4fd97952dc0bd7a287570b98
m30999| Thu Jun 14 01:40:34 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { name: "allan" } max: { name: "sara" }
m30999| Thu Jun 14 01:40:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd97952dc0bd7a287570b98 based on: 1|4||4fd97952dc0bd7a287570b98
m30999| Thu Jun 14 01:40:34 [conn] CMD: movechunk: { movechunk: "test.foo", find: { name: "allan" }, to: "localhost:30000" }
m30999| Thu Jun 14 01:40:34 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { name: "allan" } max: { name: "joe" }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' unlocked.
m30001| Thu Jun 14 01:40:34 [conn4] request split points lookup for chunk test.foo { : "allan" } -->> { : MaxKey }
m30001| Thu Jun 14 01:40:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { name: 1.0 }, min: { name: "allan" }, max: { name: MaxKey }, from: "shard0001", splitKeys: [ { name: "sara" } ], shardId: "test.foo-name_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' acquired, ts : 4fd979524174cb05840fb7e9
m30001| Thu Jun 14 01:40:34 [conn4] splitChunk accepted at version 1|2||4fd97952dc0bd7a287570b98
m30001| Thu Jun 14 01:40:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:34-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44633", time: new Date(1339652434943), what: "split", ns: "test.foo", details: { before: { min: { name: "allan" }, max: { name: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { name: "allan" }, max: { name: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97952dc0bd7a287570b98') }, right: { min: { name: "sara" }, max: { name: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97952dc0bd7a287570b98') } } }
m30001| Thu Jun 14 01:40:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' unlocked.
m30001| Thu Jun 14 01:40:34 [conn4] request split points lookup for chunk test.foo { : "allan" } -->> { : "sara" }
m30001| Thu Jun 14 01:40:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { name: 1.0 }, min: { name: "allan" }, max: { name: "sara" }, from: "shard0001", splitKeys: [ { name: "joe" } ], shardId: "test.foo-name_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' acquired, ts : 4fd979524174cb05840fb7ea
m30001| Thu Jun 14 01:40:34 [conn4] splitChunk accepted at version 1|4||4fd97952dc0bd7a287570b98
m30001| Thu Jun 14 01:40:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:34-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44633", time: new Date(1339652434946), what: "split", ns: "test.foo", details: { before: { min: { name: "allan" }, max: { name: "sara" }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { name: "allan" }, max: { name: "joe" }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97952dc0bd7a287570b98') }, right: { min: { name: "joe" }, max: { name: "sara" }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97952dc0bd7a287570b98') } } }
m30001| Thu Jun 14 01:40:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' unlocked.
m30001| Thu Jun 14 01:40:34 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { name: "allan" }, max: { name: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo-name_"allan"", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' acquired, ts : 4fd979524174cb05840fb7eb
m30001| Thu Jun 14 01:40:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:34-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44633", time: new Date(1339652434949), what: "moveChunk.start", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:34 [conn4] moveChunk request accepted at version 1|6||4fd97952dc0bd7a287570b98
m30001| Thu Jun 14 01:40:34 [conn4] moveChunk number of documents: 3
m30001| Thu Jun 14 01:40:34 [initandlisten] connection accepted from 127.0.0.1:44635 #5 (5 connections now open)
m30001| Thu Jun 14 01:40:35 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { name: "allan" }, max: { name: "joe" }, shardKeyPattern: { name: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:36 [FileAllocator] done allocating datafile /data/db/keystring1/test.1, size: 32MB, took 1.123 secs
m30000| Thu Jun 14 01:40:36 [FileAllocator] done allocating datafile /data/db/keystring0/test.ns, size: 16MB, took 1.099 secs
m30000| Thu Jun 14 01:40:36 [FileAllocator] allocating new datafile /data/db/keystring0/test.0, filling with zeroes...
m30000| Thu Jun 14 01:40:36 [FileAllocator] done allocating datafile /data/db/keystring0/test.0, size: 16MB, took 0.281 secs
m30000| Thu Jun 14 01:40:36 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:40:36 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:36 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:40:36 [migrateThread] build index test.foo { name: 1.0 }
m30000| Thu Jun 14 01:40:36 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:36 [FileAllocator] allocating new datafile /data/db/keystring0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:40:36 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { name: "allan" } -> { name: "joe" }
m30001| Thu Jun 14 01:40:36 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { name: "allan" }, max: { name: "joe" }, shardKeyPattern: { name: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 112, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:36 [conn4] moveChunk setting version to: 2|0||4fd97952dc0bd7a287570b98
m30000| Thu Jun 14 01:40:36 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { name: "allan" } -> { name: "joe" }
m30000| Thu Jun 14 01:40:36 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:36-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652436966), what: "moveChunk.to", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, step1 of 5: 1391, step2 of 5: 0, step3 of 5: 4, step4 of 5: 0, step5 of 5: 619 } }
m30000| Thu Jun 14 01:40:36 [initandlisten] connection accepted from 127.0.0.1:56733 #10 (10 connections now open)
m30001| Thu Jun 14 01:40:36 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { name: "allan" }, max: { name: "joe" }, shardKeyPattern: { name: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 112, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:36 [conn4] moveChunk updating self version to: 2|1||4fd97952dc0bd7a287570b98 through { name: MinKey } -> { name: "allan" } for collection 'test.foo'
m30001| Thu Jun 14 01:40:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:36-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44633", time: new Date(1339652436970), what: "moveChunk.commit", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:36 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:36 [conn4] moveChunk deleted: 3
m30001| Thu Jun 14 01:40:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652434:660813478' unlocked.
m30001| Thu Jun 14 01:40:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:36-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44633", time: new Date(1339652436971), what: "moveChunk.from", ns: "test.foo", details: { min: { name: "allan" }, max: { name: "joe" }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2007, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:40:36 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { name: "allan" }, max: { name: "joe" }, maxChunkSizeBytes: 52428800, shardId: "test.foo-name_"allan"", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:382 w:341 reslen:37 2023ms
m30999| Thu Jun 14 01:40:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 2|1||4fd97952dc0bd7a287570b98 based on: 1|6||4fd97952dc0bd7a287570b98
ShardingTest test.foo-name_MinKey 2000|1 { "name" : { $minKey : 1 } } -> { "name" : "allan" } shard0001 test.foo
test.foo-name_"allan" 2000|0 { "name" : "allan" } -> { "name" : "joe" } shard0000 test.foo
test.foo-name_"joe" 1000|6 { "name" : "joe" } -> { "name" : "sara" } shard0001 test.foo
test.foo-name_"sara" 1000|4 { "name" : "sara" } -> { "name" : { $maxKey : 1 } } shard0001 test.foo
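The four summary lines above give the chunk layout after three splits and one migration: test.foo is cut at "allan", "joe" and "sara", and the ["allan", "joe") chunk now lives on shard0000 while the rest stays on shard0001. The admin commands below are consistent with the split and movechunk requests logged above; the split points and target host are taken from the log, but whether the test issues exactly these calls is an assumption:

    // Splits and migration matching the metadata events logged above.
    s.adminCommand({ split: "test.foo", middle: { name: "allan" } });
    s.adminCommand({ split: "test.foo", middle: { name: "sara" } });
    s.adminCommand({ split: "test.foo", middle: { name: "joe" } });
    s.adminCommand({ movechunk: "test.foo", find: { name: "allan" }, to: "localhost:30000" });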
m30000| Thu Jun 14 01:40:36 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:36 [conn] splitting: test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 2|0||000000000000000000000000 min: { name: "allan" } max: { name: "joe" }
m30999| Thu Jun 14 01:40:36 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { name: "joe" } max: { name: "sara" }
m30999| Thu Jun 14 01:40:36 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:40:36 [conn3] end connection 127.0.0.1:56718 (9 connections now open)
m30000| Thu Jun 14 01:40:36 [conn4] end connection 127.0.0.1:56722 (8 connections now open)
m30000| Thu Jun 14 01:40:36 [conn6] end connection 127.0.0.1:56724 (7 connections now open)
m30000| Thu Jun 14 01:40:36 [conn7] end connection 127.0.0.1:56727 (6 connections now open)
m30001| Thu Jun 14 01:40:36 [conn3] end connection 127.0.0.1:44631 (4 connections now open)
m30001| Thu Jun 14 01:40:36 [conn4] end connection 127.0.0.1:44633 (4 connections now open)
m30000| Thu Jun 14 01:40:36 [FileAllocator] done allocating datafile /data/db/keystring0/test.1, size: 32MB, took 0.649 secs
Thu Jun 14 01:40:37 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:40:37 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:40:37 [interruptThread] now exiting
m30000| Thu Jun 14 01:40:37 dbexit:
m30000| Thu Jun 14 01:40:37 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:40:37 [interruptThread] closing listening socket: 36
m30000| Thu Jun 14 01:40:37 [interruptThread] closing listening socket: 37
m30000| Thu Jun 14 01:40:37 [interruptThread] closing listening socket: 38
m30000| Thu Jun 14 01:40:37 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:40:37 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:40:37 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:40:37 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:40:37 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:40:37 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:40:37 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:40:37 dbexit: really exiting now
m30001| Thu Jun 14 01:40:37 [conn5] end connection 127.0.0.1:44635 (2 connections now open)
Thu Jun 14 01:40:38 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:40:38 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:40:38 [interruptThread] now exiting
m30001| Thu Jun 14 01:40:38 dbexit:
m30001| Thu Jun 14 01:40:38 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:40:38 [interruptThread] closing listening socket: 39
m30001| Thu Jun 14 01:40:38 [interruptThread] closing listening socket: 40
m30001| Thu Jun 14 01:40:38 [interruptThread] closing listening socket: 41
m30001| Thu Jun 14 01:40:38 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:40:38 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:40:38 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:40:38 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:40:38 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:40:38 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:40:38 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:40:38 dbexit: really exiting now
Thu Jun 14 01:40:39 shell: stopped mongo program on port 30001
*** ShardingTest keystring completed successfully in 7.588 seconds ***
7633.559942ms
Thu Jun 14 01:40:40 [initandlisten] connection accepted from 127.0.0.1:35202 #41 (28 connections now open)
*******************************************
Test : limit_push.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/limit_push.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/limit_push.js";TestData.testFile = "limit_push.js";TestData.testName = "limit_push";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:40:40 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/limit_push0'
Thu Jun 14 01:40:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/limit_push0
m30000| Thu Jun 14 01:40:40
m30000| Thu Jun 14 01:40:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:40:40
m30000| Thu Jun 14 01:40:40 [initandlisten] MongoDB starting : pid=26475 port=30000 dbpath=/data/db/limit_push0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:40:40 [initandlisten]
m30000| Thu Jun 14 01:40:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:40:40 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:40:40 [initandlisten]
m30000| Thu Jun 14 01:40:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:40:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:40:40 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:40:40 [initandlisten]
m30000| Thu Jun 14 01:40:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:40:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:40:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:40:40 [initandlisten] options: { dbpath: "/data/db/limit_push0", port: 30000 }
m30000| Thu Jun 14 01:40:40 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:40:40 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/limit_push1'
m30000| Thu Jun 14 01:40:40 [initandlisten] connection accepted from 127.0.0.1:56736 #1 (1 connection now open)
Thu Jun 14 01:40:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/limit_push1
m30001| Thu Jun 14 01:40:40
m30001| Thu Jun 14 01:40:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:40:40
m30001| Thu Jun 14 01:40:40 [initandlisten] MongoDB starting : pid=26488 port=30001 dbpath=/data/db/limit_push1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:40:40 [initandlisten]
m30001| Thu Jun 14 01:40:40 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:40:40 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:40:40 [initandlisten]
m30001| Thu Jun 14 01:40:40 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:40:40 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:40:40 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:40:40 [initandlisten]
m30001| Thu Jun 14 01:40:40 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:40:40 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:40:40 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:40:40 [initandlisten] options: { dbpath: "/data/db/limit_push1", port: 30001 }
m30001| Thu Jun 14 01:40:40 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:40:40 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:40:40 [initandlisten] connection accepted from 127.0.0.1:44641 #1 (1 connection now open)
ShardingTest limit_push :
{
	"config" : "localhost:30000",
	"shards" : [
		connection to localhost:30000,
		connection to localhost:30001
	]
}
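From the log that follows, the limit_push test inserts 100 documents into test.limit_push, builds an { x: 1 } index ("scanned 100 total records"), shards the collection on { x: 1.0 }, splits at x: 50, and moves the upper chunk to shard0000, leaving 50 documents on each shard (clonedBytes: 1450 for the migrated half). A minimal sketch of that setup; the document shape, constructor arguments and exact call order are assumptions, while the counts, split point and moveChunk arguments come from the log below:

    // Sketch of the data and chunk layout seen in the limit_push log: 100 docs, split at x: 50, upper half on shard0000.
    var s  = new ShardingTest("limit_push", 2);
    var db = s.getDB("test");
    for (var i = 0; i < 100; i++) { db.limit_push.insert({ x: i }); }
    db.limit_push.ensureIndex({ x: 1 });
    s.adminCommand({ enablesharding: "test" });
    s.adminCommand({ shardcollection: "test.limit_push", key: { x: 1 } });
    s.adminCommand({ split: "test.limit_push", middle: { x: 50 } });
    s.adminCommand({ moveChunk: "test.limit_push", find: { x: 51 }, to: "shard0000" });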
Thu Jun 14 01:40:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30999| Thu Jun 14 01:40:40 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:40:40 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26502 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:40:40 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:40:40 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:40:40 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:40:40 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:40:40 [mongosMain] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:40:40 [initandlisten] connection accepted from 127.0.0.1:56739 #2 (2 connections now open)
m30000| Thu Jun 14 01:40:40 [FileAllocator] allocating new datafile /data/db/limit_push0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:40:40 [FileAllocator] creating directory /data/db/limit_push0/_tmp
m30999| Thu Jun 14 01:40:40 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:40 [mongosMain] connected connection!
m30000| Thu Jun 14 01:40:40 [initandlisten] connection accepted from 127.0.0.1:56741 #3 (3 connections now open)
m30000| Thu Jun 14 01:40:40 [FileAllocator] done allocating datafile /data/db/limit_push0/config.ns, size: 16MB, took 0.319 secs
m30000| Thu Jun 14 01:40:40 [FileAllocator] allocating new datafile /data/db/limit_push0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:40:41 [FileAllocator] done allocating datafile /data/db/limit_push0/config.0, size: 16MB, took 0.32 secs
m30000| Thu Jun 14 01:40:41 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn2] insert config.settings keyUpdates:0 locks(micros) w:651910 651ms
m30000| Thu Jun 14 01:40:41 [FileAllocator] allocating new datafile /data/db/limit_push0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:40:41 [initandlisten] connection accepted from 127.0.0.1:56745 #4 (4 connections now open)
m30000| Thu Jun 14 01:40:41 [initandlisten] connection accepted from 127.0.0.1:56746 #5 (5 connections now open)
m30000| Thu Jun 14 01:40:41 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:40:41 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:40:41 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:40:41 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:40:41 [initandlisten] connection accepted from 127.0.0.1:56747 #6 (6 connections now open)
m30000| Thu Jun 14 01:40:41 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:40:41 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:41 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:41 [mongosMain] connected connection!
m30999| Thu Jun 14 01:40:41 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:40:41 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:40:41 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:40:41 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:40:41 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:40:41 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:40:41 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:40:41 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:40:41 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:40:41
m30999| Thu Jun 14 01:40:41 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:40:41 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:41 [Balancer] connected connection!
m30000| Thu Jun 14 01:40:41 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:40:41 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:40:41 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:41 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:40:41 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:40:41 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:40:41 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:40:41 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652441:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652441:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652441:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:40:41 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97959a03f616781fe0fe2" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:40:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652441:1804289383' acquired, ts : 4fd97959a03f616781fe0fe2
m30999| Thu Jun 14 01:40:41 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:40:41 [Balancer] no collections to balance
m30999| Thu Jun 14 01:40:41 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:40:41 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:40:41 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652441:1804289383' unlocked.
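The balancer round above is serialized through a distributed lock stored on the config server: the mongos inserts the initial document into config.locks, flips its state to acquire the lock, runs one (empty) balancing round, and unlocks. That state can be inspected directly on the config database; a small sketch, assuming "s" is the running ShardingTest handle:

    // The balancer lock lives in config.locks; chunk-size and balancer settings live in config.settings.
    var conf = s.getDB("config");
    conf.locks.find({ _id: "balancer" }).forEach(printjson);
    conf.settings.find().forEach(printjson);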
m30999| Thu Jun 14 01:40:41 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652441:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:40:41 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:40:41 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652441:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:40:41 [mongosMain] connection accepted from 127.0.0.1:53629 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:40:41 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:40:41 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:40:41 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:41 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:40:41 [FileAllocator] done allocating datafile /data/db/limit_push0/config.1, size: 32MB, took 0.587 secs
m30000| Thu Jun 14 01:40:41 [conn5] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:368 w:1658 reslen:177 424ms
m30999| Thu Jun 14 01:40:41 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:40:41 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:41 [conn] connected connection!
m30001| Thu Jun 14 01:40:41 [initandlisten] connection accepted from 127.0.0.1:44652 #2 (2 connections now open)
m30999| Thu Jun 14 01:40:41 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:40:41 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:40:41 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30000| Thu Jun 14 01:40:41 [initandlisten] connection accepted from 127.0.0.1:56750 #7 (7 connections now open)
m30999| Thu Jun 14 01:40:41 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:40:41 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:41 [conn] connected connection!
m30999| Thu Jun 14 01:40:41 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97959a03f616781fe0fe1
m30999| Thu Jun 14 01:40:41 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:40:41 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:41 [conn] connected connection!
m30999| Thu Jun 14 01:40:41 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97959a03f616781fe0fe1
m30999| Thu Jun 14 01:40:41 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:40:41 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:40:41 [initandlisten] connection accepted from 127.0.0.1:44654 #3 (3 connections now open)
m30001| Thu Jun 14 01:40:41 [FileAllocator] allocating new datafile /data/db/limit_push1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:40:41 [FileAllocator] creating directory /data/db/limit_push1/_tmp
m30001| Thu Jun 14 01:40:42 [FileAllocator] done allocating datafile /data/db/limit_push1/test.ns, size: 16MB, took 0.405 secs
m30001| Thu Jun 14 01:40:42 [FileAllocator] allocating new datafile /data/db/limit_push1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:40:42 [FileAllocator] done allocating datafile /data/db/limit_push1/test.0, size: 16MB, took 0.327 secs
m30001| Thu Jun 14 01:40:42 [FileAllocator] allocating new datafile /data/db/limit_push1/test.1, filling with zeroes...
m30001| Thu Jun 14 01:40:42 [conn3] build index test.limit_push { _id: 1 }
m30001| Thu Jun 14 01:40:42 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:42 [conn3] insert test.limit_push keyUpdates:0 locks(micros) W:51 w:746081 745ms
m30001| Thu Jun 14 01:40:42 [conn3] build index test.limit_push { x: 1.0 }
m30001| Thu Jun 14 01:40:42 [conn3] build index done. scanned 100 total records. 0 secs
m30001| Thu Jun 14 01:40:42 [initandlisten] connection accepted from 127.0.0.1:44655 #4 (4 connections now open)
m30000| Thu Jun 14 01:40:42 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:40:42 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:42 [initandlisten] connection accepted from 127.0.0.1:56753 #8 (8 connections now open)
m30000| Thu Jun 14 01:40:42 [initandlisten] connection accepted from 127.0.0.1:56754 #9 (9 connections now open)
m30001| Thu Jun 14 01:40:42 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:40:42 [conn4] received splitChunk request: { splitChunk: "test.limit_push", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 50.0 } ], shardId: "test.limit_push-x_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:42 [conn4] created new distributed lock for test.limit_push on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:42 [conn4] distributed lock 'test.limit_push/domU-12-31-39-01-70-B4:30001:1339652442:392978882' acquired, ts : 4fd9795a844a0d40557ad4bd
m30001| Thu Jun 14 01:40:42 [conn4] splitChunk accepted at version 1|0||4fd9795aa03f616781fe0fe3
m30001| Thu Jun 14 01:40:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:42-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44655", time: new Date(1339652442536), what: "split", ns: "test.limit_push", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 50.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd9795aa03f616781fe0fe3') }, right: { min: { x: 50.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd9795aa03f616781fe0fe3') } } }
m30001| Thu Jun 14 01:40:42 [conn4] distributed lock 'test.limit_push/domU-12-31-39-01-70-B4:30001:1339652442:392978882' unlocked.
m30001| Thu Jun 14 01:40:42 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652442:392978882 (sleeping for 30000ms)
m30999| Thu Jun 14 01:40:42 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:40:42 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:40:42 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:42 [conn] connected connection!
m30999| Thu Jun 14 01:40:42 [conn] CMD: shardcollection: { shardcollection: "test.limit_push", key: { x: 1.0 } }
m30999| Thu Jun 14 01:40:42 [conn] enable sharding on: test.limit_push with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:40:42 [conn] going to create 1 chunk(s) for: test.limit_push using new epoch 4fd9795aa03f616781fe0fe3
m30999| Thu Jun 14 01:40:42 [conn] ChunkManager: time to load chunks for test.limit_push: 0ms sequenceNumber: 2 version: 1|0||4fd9795aa03f616781fe0fe3 based on: (empty)
m30999| Thu Jun 14 01:40:42 [conn] setShardVersion shard0001 localhost:30001 test.limit_push { setShardVersion: "test.limit_push", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9795aa03f616781fe0fe3'), serverID: ObjectId('4fd97959a03f616781fe0fe1'), shard: "shard0001", shardHost: "localhost:30001" } 0x982bb18
m30999| Thu Jun 14 01:40:42 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.limit_push", need_authoritative: true, errmsg: "first time for collection 'test.limit_push'", ok: 0.0 }
m30999| Thu Jun 14 01:40:42 [conn] setShardVersion shard0001 localhost:30001 test.limit_push { setShardVersion: "test.limit_push", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9795aa03f616781fe0fe3'), serverID: ObjectId('4fd97959a03f616781fe0fe1'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x982bb18
m30999| Thu Jun 14 01:40:42 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:40:42 [conn] splitting: test.limit_push shard: ns:test.limit_push at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey }
m30999| Thu Jun 14 01:40:42 [conn] ChunkManager: time to load chunks for test.limit_push: 0ms sequenceNumber: 3 version: 1|2||4fd9795aa03f616781fe0fe3 based on: 1|0||4fd9795aa03f616781fe0fe3
m30999| Thu Jun 14 01:40:42 [conn] CMD: movechunk: { moveChunk: "test.limit_push", find: { x: 51.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:40:42 [conn] moving chunk ns: test.limit_push moving ( ns:test.limit_push at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 50.0 } max: { x: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:40:42 [conn4] received moveChunk request: { moveChunk: "test.limit_push", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 50.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.limit_push-x_50.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:40:42 [conn4] created new distributed lock for test.limit_push on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:40:42 [conn4] distributed lock 'test.limit_push/domU-12-31-39-01-70-B4:30001:1339652442:392978882' acquired, ts : 4fd9795a844a0d40557ad4be
m30001| Thu Jun 14 01:40:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:42-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44655", time: new Date(1339652442539), what: "moveChunk.start", ns: "test.limit_push", details: { min: { x: 50.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:42 [conn4] moveChunk request accepted at version 1|2||4fd9795aa03f616781fe0fe3
m30001| Thu Jun 14 01:40:42 [conn4] moveChunk number of documents: 50
m30000| Thu Jun 14 01:40:42 [initandlisten] connection accepted from 127.0.0.1:56755 #10 (10 connections now open)
m30001| Thu Jun 14 01:40:42 [initandlisten] connection accepted from 127.0.0.1:44659 #5 (5 connections now open)
m30000| Thu Jun 14 01:40:42 [FileAllocator] allocating new datafile /data/db/limit_push0/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:40:43 [FileAllocator] done allocating datafile /data/db/limit_push1/test.1, size: 32MB, took 0.956 secs
m30000| Thu Jun 14 01:40:43 [FileAllocator] done allocating datafile /data/db/limit_push0/test.ns, size: 16MB, took 0.935 secs
m30000| Thu Jun 14 01:40:43 [FileAllocator] allocating new datafile /data/db/limit_push0/test.0, filling with zeroes...
m30001| Thu Jun 14 01:40:43 [conn4] moveChunk data transfer progress: { active: true, ns: "test.limit_push", from: "localhost:30001", min: { x: 50.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:40:43 [FileAllocator] done allocating datafile /data/db/limit_push0/test.0, size: 16MB, took 0.264 secs
m30000| Thu Jun 14 01:40:43 [migrateThread] build index test.limit_push { _id: 1 }
m30000| Thu Jun 14 01:40:43 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:43 [migrateThread] info: creating collection test.limit_push on add index
m30000| Thu Jun 14 01:40:43 [migrateThread] build index test.limit_push { x: 1.0 }
m30000| Thu Jun 14 01:40:43 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:43 [FileAllocator] allocating new datafile /data/db/limit_push0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:40:43 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.limit_push' { x: 50.0 } -> { x: MaxKey }
m30000| Thu Jun 14 01:40:44 [FileAllocator] done allocating datafile /data/db/limit_push0/test.1, size: 32MB, took 0.629 secs
m30001| Thu Jun 14 01:40:44 [conn4] moveChunk data transfer progress: { active: true, ns: "test.limit_push", from: "localhost:30001", min: { x: 50.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 50, clonedBytes: 1450, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:40:44 [conn4] moveChunk setting version to: 2|0||4fd9795aa03f616781fe0fe3
m30000| Thu Jun 14 01:40:44 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.limit_push' { x: 50.0 } -> { x: MaxKey }
m30000| Thu Jun 14 01:40:44 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:44-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652444562), what: "moveChunk.to", ns: "test.limit_push", details: { min: { x: 50.0 }, max: { x: MaxKey }, step1 of 5: 1210, step2 of 5: 0, step3 of 5: 33, step4 of 5: 0, step5 of 5: 777 } }
m30000| Thu Jun 14 01:40:44 [initandlisten] connection accepted from 127.0.0.1:56757 #11 (11 connections now open)
m30001| Thu Jun 14 01:40:44 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.limit_push", from: "localhost:30001", min: { x: 50.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 50, clonedBytes: 1450, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:40:44 [conn4] moveChunk updating self version to: 2|1||4fd9795aa03f616781fe0fe3 through { x: MinKey } -> { x: 50.0 } for collection 'test.limit_push'
m30001| Thu Jun 14 01:40:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:44-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44655", time: new Date(1339652444566), what: "moveChunk.commit", ns: "test.limit_push", details: { min: { x: 50.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:40:44 [conn4] doing delete inline
m30001| Thu Jun 14 01:40:44 [conn4] moveChunk deleted: 50
m30001| Thu Jun 14 01:40:44 [conn4] distributed lock 'test.limit_push/domU-12-31-39-01-70-B4:30001:1339652442:392978882' unlocked.
m30001| Thu Jun 14 01:40:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:44-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:44655", time: new Date(1339652444569), what: "moveChunk.from", ns: "test.limit_push", details: { min: { x: 50.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2009, step5 of 6: 16, step6 of 6: 1 } }
m30001| Thu Jun 14 01:40:44 [conn4] command admin.$cmd command: { moveChunk: "test.limit_push", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 50.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.limit_push-x_50.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:628 w:1607 reslen:37 2030ms
m30999| Thu Jun 14 01:40:44 [conn] moveChunk result: { ok: 1.0 }
m30000| Thu Jun 14 01:40:44 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:44 [conn] ChunkManager: time to load chunks for test.limit_push: 0ms sequenceNumber: 4 version: 2|1||4fd9795aa03f616781fe0fe3 based on: 1|2||4fd9795aa03f616781fe0fe3
m30999| Thu Jun 14 01:40:44 [conn] setShardVersion shard0000 localhost:30000 test.limit_push { setShardVersion: "test.limit_push", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd9795aa03f616781fe0fe3'), serverID: ObjectId('4fd97959a03f616781fe0fe1'), shard: "shard0000", shardHost: "localhost:30000" } 0x982a1d0
m30999| Thu Jun 14 01:40:44 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.limit_push", need_authoritative: true, errmsg: "first time for collection 'test.limit_push'", ok: 0.0 }
m30999| Thu Jun 14 01:40:44 [conn] setShardVersion shard0000 localhost:30000 test.limit_push { setShardVersion: "test.limit_push", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd9795aa03f616781fe0fe3'), serverID: ObjectId('4fd97959a03f616781fe0fe1'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x982a1d0
m30999| Thu Jun 14 01:40:44 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:40:44 [conn] setShardVersion shard0001 localhost:30001 test.limit_push { setShardVersion: "test.limit_push", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd9795aa03f616781fe0fe3'), serverID: ObjectId('4fd97959a03f616781fe0fe1'), shard: "shard0001", shardHost: "localhost:30001" } 0x982bb18
m30999| Thu Jun 14 01:40:44 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd9795aa03f616781fe0fe3'), ok: 1.0 }
{
    "clusteredType" : "ParallelSort",
    "shards" : {
        "localhost:30000" : [
            {
                "cursor" : "BtreeCursor x_1 reverse",
                "isMultiKey" : false,
                "n" : 1,
                "nscannedObjects" : 1,
                "nscanned" : 1,
                "scanAndOrder" : false,
                "indexOnly" : false,
                "nYields" : 0,
                "nChunkSkips" : 0,
                "millis" : 0,
                "indexBounds" : {
                    "x" : [
                        [
                            60,
                            -1.7976931348623157e+308
                        ]
                    ]
                },
                "server" : "domU-12-31-39-01-70-B4:30000"
            }
        ],
        "localhost:30001" : [
            {
                "cursor" : "BtreeCursor x_1 reverse",
                "isMultiKey" : false,
                "n" : 1,
                "nscannedObjects" : 1,
                "nscanned" : 1,
                "scanAndOrder" : false,
                "indexOnly" : false,
                "nYields" : 0,
                "nChunkSkips" : 0,
                "millis" : 0,
                "indexBounds" : {
                    "x" : [
                        [
                            60,
                            -1.7976931348623157e+308
                        ]
                    ]
                },
                "server" : "domU-12-31-39-01-70-B4:30001"
            }
        ]
    },
    "cursor" : "BtreeCursor x_1 reverse",
    "n" : 2,
    "nChunkSkips" : 0,
    "nYields" : 0,
    "nscanned" : 2,
    "nscannedObjects" : 2,
    "millisShardTotal" : 0,
    "millisShardAvg" : 0,
    "numQueries" : 2,
    "numShards" : 2,
    "millis" : 0
}
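This is the merged explain for a sharded, sorted, limited query: each shard ran a reverse scan of its x_1 index bounded at 60 and returned a single document, and mongos combined the two per-shard plans under the "ParallelSort" clusteredType. The query itself is not printed in the log; a hypothetical shell call that produces this shape of plan (predicate and limit are assumptions, not taken from the test source) would be:

    // Hypothetical reconstruction of the kind of query explained above.
    db.limit_push.find({ x: { $lt: 60 } }).sort({ x: -1 }).limit(1).explain();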
m30999| Thu Jun 14 01:40:44 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:40:44 [conn4] end connection 127.0.0.1:56745 (10 connections now open)
m30000| Thu Jun 14 01:40:44 [conn6] end connection 127.0.0.1:56747 (10 connections now open)
m30000| Thu Jun 14 01:40:44 [conn3] end connection 127.0.0.1:56741 (8 connections now open)
m30000| Thu Jun 14 01:40:44 [conn7] end connection 127.0.0.1:56750 (7 connections now open)
m30001| Thu Jun 14 01:40:44 [conn4] end connection 127.0.0.1:44655 (4 connections now open)
m30001| Thu Jun 14 01:40:44 [conn3] end connection 127.0.0.1:44654 (4 connections now open)
Thu Jun 14 01:40:45 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:40:45 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:40:45 [interruptThread] now exiting
m30000| Thu Jun 14 01:40:45 dbexit:
m30000| Thu Jun 14 01:40:45 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:40:45 [interruptThread] closing listening socket: 37
m30000| Thu Jun 14 01:40:45 [interruptThread] closing listening socket: 38
m30000| Thu Jun 14 01:40:45 [interruptThread] closing listening socket: 39
m30000| Thu Jun 14 01:40:45 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:40:45 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:40:45 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:40:45 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:40:45 [conn5] end connection 127.0.0.1:44659 (2 connections now open)
m30000| Thu Jun 14 01:40:45 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:40:45 [conn11] end connection 127.0.0.1:56757 (6 connections now open)
m30000| Thu Jun 14 01:40:45 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:40:45 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:40:45 dbexit: really exiting now
Thu Jun 14 01:40:46 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:40:46 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:40:46 [interruptThread] now exiting
m30001| Thu Jun 14 01:40:46 dbexit:
m30001| Thu Jun 14 01:40:46 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:40:46 [interruptThread] closing listening socket: 40
m30001| Thu Jun 14 01:40:46 [interruptThread] closing listening socket: 41
m30001| Thu Jun 14 01:40:46 [interruptThread] closing listening socket: 42
m30001| Thu Jun 14 01:40:46 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:40:46 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:40:46 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:40:46 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:40:46 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:40:46 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:40:46 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:40:46 dbexit: really exiting now
Thu Jun 14 01:40:47 shell: stopped mongo program on port 30001
*** ShardingTest limit_push completed successfully in 7.561 seconds ***
7605.918169ms
Thu Jun 14 01:40:47 [initandlisten] connection accepted from 127.0.0.1:35226 #42 (29 connections now open)
*******************************************
Test : major_version_check.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/major_version_check.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/major_version_check.js";TestData.testFile = "major_version_check.js";TestData.testName = "major_version_check";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:40:47 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:40:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:40:47
m30000| Thu Jun 14 01:40:47 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:40:47
m30000| Thu Jun 14 01:40:47 [initandlisten] MongoDB starting : pid=26551 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:40:47 [initandlisten]
m30000| Thu Jun 14 01:40:47 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:40:47 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:40:47 [initandlisten]
m30000| Thu Jun 14 01:40:47 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:40:47 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:40:47 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:40:47 [initandlisten]
m30000| Thu Jun 14 01:40:47 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:40:47 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:40:47 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:40:47 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:40:47 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:40:47 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:40:47 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m30000| Thu Jun 14 01:40:47 [initandlisten] connection accepted from 127.0.0.1:56760 #1 (1 connection now open)
m29000| Thu Jun 14 01:40:47
m29000| Thu Jun 14 01:40:47 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:40:47
m29000| Thu Jun 14 01:40:47 [initandlisten] MongoDB starting : pid=26564 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:40:47 [initandlisten]
m29000| Thu Jun 14 01:40:47 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:40:47 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:40:47 [initandlisten]
m29000| Thu Jun 14 01:40:47 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:40:47 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:40:47 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:40:47 [initandlisten]
m29000| Thu Jun 14 01:40:47 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:40:47 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:40:47 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:40:47 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:40:47 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:40:47 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:40:47 [websvr] ERROR: addr already in use
"localhost:29000"
ShardingTest test :
{
    "config" : "localhost:29000",
    "shards" : [
        connection to localhost:30000
    ]
}
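The topology summarized above (one shard on 30000, a standalone config server on 29000, and two mongos routers on 30999 and 30998) is what the test's ShardingTest harness wires up. A rough sketch using the legacy positional constructor, with the caveat that the exact options used by major_version_check.js are not visible in this log:

    // Sketch only: ShardingTest(testName, numShards, verboseLevel, numMongos, otherParams)
    var st = new ShardingTest("test", 1, 0, 2);
    var routerA = st.s0, routerB = st.s1;   // handles to the two mongos processes started below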
Thu Jun 14 01:40:48 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:29000
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:54996 #1 (1 connection now open)
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:54997 #2 (2 connections now open)
m30999| Thu Jun 14 01:40:48 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:40:48 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26577 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:40:48 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:40:48 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:40:48 [mongosMain] options: { configdb: "localhost:29000", port: 30999 }
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:54998 #3 (3 connections now open)
m29000| Thu Jun 14 01:40:48 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:40:48 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29000| Thu Jun 14 01:40:48 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.236 secs
m29000| Thu Jun 14 01:40:48 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:40:48 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.282 secs
m29000| Thu Jun 14 01:40:48 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:40:48 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:40:48 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn2] insert config.settings keyUpdates:0 locks(micros) w:563644 563ms
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:55002 #4 (4 connections now open)
m29000| Thu Jun 14 01:40:48 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:40:48 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn3] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:40:48 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:40:48 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:40:48 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:40:48 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:40:48 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn3] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:40:48 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:40:48 [conn3] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:40:48 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:40:48 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:48 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:40:48 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:40:48 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:40:48 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:40:48 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:40:48
m30999| Thu Jun 14 01:40:48 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:55003 #5 (5 connections now open)
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:55004 #6 (6 connections now open)
m29000| Thu Jun 14 01:40:48 [conn5] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:40:48 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:48 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652448:1804289383' acquired, ts : 4fd979609068822bcd780e39
m30999| Thu Jun 14 01:40:48 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30999:1339652448:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:40:48 [conn6] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:40:48 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:48 [conn6] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:40:48 [conn6] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:40:48 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652448:1804289383' unlocked.
m30999| Thu Jun 14 01:40:48 [mongosMain] connection accepted from 127.0.0.1:54131 #1 (1 connection now open)
Thu Jun 14 01:40:48 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:29000
m30998| Thu Jun 14 01:40:48 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:40:48 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26600 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:40:48 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:40:48 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:40:48 [mongosMain] options: { configdb: "localhost:29000", port: 30998 }
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:44191 #7 (7 connections now open)
m30998| Thu Jun 14 01:40:48 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:40:48 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:40:48 [Balancer] about to contact config servers and shards
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:44193 #8 (8 connections now open)
m30998| Thu Jun 14 01:40:48 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:40:48 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:40:48
m30998| Thu Jun 14 01:40:48 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:40:48 [initandlisten] connection accepted from 127.0.0.1:44194 #9 (9 connections now open)
m30998| Thu Jun 14 01:40:48 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652448:1804289383' acquired, ts : 4fd979607ee068111e124221
m30998| Thu Jun 14 01:40:48 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652448:1804289383' unlocked.
m30998| Thu Jun 14 01:40:48 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30998:1339652448:1804289383 (sleeping for 30000ms)
ShardingTest undefined going to add shard : localhost:30000
m30000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:60075 #2 (2 connections now open)
{ "shardAdded" : "shard0000", "ok" : 1 }
m30000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:60076 #3 (3 connections now open)
m30998| Thu Jun 14 01:40:49 [mongosMain] connection accepted from 127.0.0.1:41953 #1 (1 connection now open)
m30999| Thu Jun 14 01:40:49 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:40:49 [conn] put [admin] on: config:localhost:29000
m30999| Thu Jun 14 01:40:49 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30999| Thu Jun 14 01:40:49 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd979609068822bcd780e38
m30999| Thu Jun 14 01:40:49 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd979609068822bcd780e38
m29000| Thu Jun 14 01:40:49 [conn6] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:40:49 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:44198 #10 (10 connections now open)
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:60078 #4 (4 connections now open)
{ "ok" : 1 }
m30999| Thu Jun 14 01:40:49 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:40:49 [conn] put [foo] on: shard0000:localhost:30000
m30999| Thu Jun 14 01:40:49 [conn] enabling sharding on: foo
m30999| Thu Jun 14 01:40:49 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:40:49 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30000| Thu Jun 14 01:40:49 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30000| Thu Jun 14 01:40:49 [FileAllocator] creating directory /data/db/test0/_tmp
m30999| Thu Jun 14 01:40:49 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd979619068822bcd780e3a
m30999| Thu Jun 14 01:40:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd979619068822bcd780e3a based on: (empty)
m29000| Thu Jun 14 01:40:49 [conn6] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:40:49 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:40:49 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.655 secs
m30000| Thu Jun 14 01:40:49 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.278 secs
m30000| Thu Jun 14 01:40:49 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:40:49 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.255 secs
m30000| Thu Jun 14 01:40:49 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30000| Thu Jun 14 01:40:49 [conn4] build index foo.bar { _id: 1 }
m30000| Thu Jun 14 01:40:49 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:49 [conn4] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:40:49 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) W:116 r:456 w:833567 833ms
m30000| Thu Jun 14 01:40:49 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979619068822bcd780e3a'), serverID: ObjectId('4fd979609068822bcd780e38'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:75 reslen:171 833ms
m30000| Thu Jun 14 01:40:49 [conn3] no current chunk manager found for this shard, will initialize
m29000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:44200 #11 (11 connections now open)
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30998| Thu Jun 14 01:40:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd979619068822bcd780e3a based on: (empty)
m30998| Thu Jun 14 01:40:49 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd979607ee068111e124220
m30000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:60080 #5 (5 connections now open)
m30999| Thu Jun 14 01:40:49 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m29000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:44202 #12 (12 connections now open)
m30000| Thu Jun 14 01:40:49 [conn4] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:40:49 [conn4] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
{ "ok" : 1 }
{
    "version" : Timestamp(1000, 2),
    "versionEpoch" : ObjectId("4fd979619068822bcd780e3a"),
    "ok" : 1
}
m30000| Thu Jun 14 01:40:49 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652449:1149909308' acquired, ts : 4fd97961d8ef7c2d55da4953
m30000| Thu Jun 14 01:40:49 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30000:1339652449:1149909308 (sleeping for 30000ms)
m30000| Thu Jun 14 01:40:49 [conn4] splitChunk accepted at version 1|0||4fd979619068822bcd780e3a
m30000| Thu Jun 14 01:40:49 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:40:49-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:60078", time: new Date(1339652449925), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979619068822bcd780e3a') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979619068822bcd780e3a') } } }
m30000| Thu Jun 14 01:40:49 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652449:1149909308' unlocked.
m30999| Thu Jun 14 01:40:49 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|2||4fd979619068822bcd780e3a based on: 1|0||4fd979619068822bcd780e3a
m30999| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 }
m30999| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
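The splitChunk above divided foo.bar's single initial chunk at { _id: 0 }, producing the MinKey→0 and 0→MaxKey chunks that mongos then reloads at version 1|2. From a shell connected to the mongos on 30999, the same split can be requested explicitly; a sketch:

    // Sketch: split foo.bar at _id 0, matching the left/right chunks in the metadata event above.
    db.getSiblingDB("admin").runCommand({ split: "foo.bar", middle: { _id: 0 } });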
m29000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:44203 #13 (13 connections now open)
{
    "version" : Timestamp(1000, 0),
    "versionEpoch" : ObjectId("4fd979619068822bcd780e3a"),
    "ok" : 1
}
m30998| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30998| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30999| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 }
m30999| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
{
    "version" : Timestamp(1000, 0),
    "versionEpoch" : ObjectId("4fd979619068822bcd780e3a"),
    "ok" : 1
}
m30000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:60083 #6 (6 connections now open)
{
    "version" : Timestamp(1000, 0),
    "versionEpoch" : ObjectId("4fd979619068822bcd780e3a"),
    "ok" : 1
}
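The { "version" : Timestamp(...) } documents interleaved above are most likely getShardVersion responses gathered through the two routers: the mongos on 30999 reports 1|2 after the split, while the mongos on 30998 still holds 1|0, and the test appears to verify that a minor-version-only change does not force the second router to refresh. That reading is inferred from the log, not from the test source. A sketch of polling both routers:

    // Sketch: compare the version each router reports for foo.bar
    // (ports taken from the log; the comparison logic itself belongs to the test).
    var adminA = new Mongo("localhost:30999").getDB("admin");
    var adminB = new Mongo("localhost:30998").getDB("admin");
    printjson(adminA.runCommand({ getShardVersion: "foo.bar" }));
    printjson(adminB.runCommand({ getShardVersion: "foo.bar" }));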
m30998| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30998| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30998| Thu Jun 14 01:40:49 [mongosMain] connection accepted from 127.0.0.1:41963 #2 (2 connections now open)
m30998| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30998| Thu Jun 14 01:40:49 [conn] ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Thu Jun 14 01:40:49 [initandlisten] connection accepted from 127.0.0.1:60085 #7 (7 connections now open)
m30999| Thu Jun 14 01:40:49 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:40:49 [conn6] end connection 127.0.0.1:55004 (12 connections now open)
m29000| Thu Jun 14 01:40:49 [conn4] end connection 127.0.0.1:55002 (11 connections now open)
m29000| Thu Jun 14 01:40:49 [conn10] end connection 127.0.0.1:44198 (10 connections now open)
m29000| Thu Jun 14 01:40:49 [conn3] end connection 127.0.0.1:54998 (9 connections now open)
m30000| Thu Jun 14 01:40:49 [conn3] end connection 127.0.0.1:60076 (6 connections now open)
m30000| Thu Jun 14 01:40:49 [conn4] end connection 127.0.0.1:60078 (5 connections now open)
m29000| Thu Jun 14 01:40:49 [conn5] end connection 127.0.0.1:55003 (8 connections now open)
m30000| Thu Jun 14 01:40:50 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.718 secs
Thu Jun 14 01:40:50 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:40:50 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:40:50 [conn7] end connection 127.0.0.1:44191 (7 connections now open)
m29000| Thu Jun 14 01:40:50 [conn8] end connection 127.0.0.1:44193 (6 connections now open)
m30000| Thu Jun 14 01:40:50 [conn5] end connection 127.0.0.1:60080 (4 connections now open)
m30000| Thu Jun 14 01:40:50 [conn7] end connection 127.0.0.1:60085 (3 connections now open)
m29000| Thu Jun 14 01:40:50 [conn9] end connection 127.0.0.1:44194 (5 connections now open)
Thu Jun 14 01:40:51 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:40:51 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:40:51 [interruptThread] now exiting
m30000| Thu Jun 14 01:40:51 dbexit:
m30000| Thu Jun 14 01:40:51 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:40:51 [interruptThread] closing listening socket: 38
m30000| Thu Jun 14 01:40:51 [interruptThread] closing listening socket: 39
m30000| Thu Jun 14 01:40:51 [interruptThread] closing listening socket: 40
m30000| Thu Jun 14 01:40:51 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:40:51 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:40:51 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:40:51 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:40:51 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:40:51 [conn12] end connection 127.0.0.1:44202 (4 connections now open)
m29000| Thu Jun 14 01:40:51 [conn13] end connection 127.0.0.1:44203 (3 connections now open)
m30000| Thu Jun 14 01:40:51 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:40:51 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:40:51 [conn11] end connection 127.0.0.1:44200 (2 connections now open)
m30000| Thu Jun 14 01:40:51 dbexit: really exiting now
Thu Jun 14 01:40:52 shell: stopped mongo program on port 30000
m29000| Thu Jun 14 01:40:52 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:40:52 [interruptThread] now exiting
m29000| Thu Jun 14 01:40:52 dbexit:
m29000| Thu Jun 14 01:40:52 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:40:52 [interruptThread] closing listening socket: 41
m29000| Thu Jun 14 01:40:52 [interruptThread] closing listening socket: 42
m29000| Thu Jun 14 01:40:52 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:40:52 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:40:52 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:40:52 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:40:52 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:40:52 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:40:52 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:40:52 dbexit: really exiting now
Thu Jun 14 01:40:53 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 6.286 seconds ***
6341.459990ms
Thu Jun 14 01:40:53 [initandlisten] connection accepted from 127.0.0.1:34742 #43 (30 connections now open)
*******************************************
Test : mapReduce.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mapReduce.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/mapReduce.js";TestData.testFile = "mapReduce.js";TestData.testName = "mapReduce";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:40:53 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/mrShard0'
Thu Jun 14 01:40:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/mrShard0
m30000| Thu Jun 14 01:40:54
m30000| Thu Jun 14 01:40:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:40:54
m30000| Thu Jun 14 01:40:54 [initandlisten] MongoDB starting : pid=26647 port=30000 dbpath=/data/db/mrShard0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:40:54 [initandlisten]
m30000| Thu Jun 14 01:40:54 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:40:54 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:40:54 [initandlisten]
m30000| Thu Jun 14 01:40:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:40:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:40:54 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:40:54 [initandlisten]
m30000| Thu Jun 14 01:40:54 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:40:54 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:40:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:40:54 [initandlisten] options: { dbpath: "/data/db/mrShard0", port: 30000 }
m30000| Thu Jun 14 01:40:54 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:40:54 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/mrShard1'
m30000| Thu Jun 14 01:40:54 [initandlisten] connection accepted from 127.0.0.1:60088 #1 (1 connection now open)
Thu Jun 14 01:40:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/mrShard1
m30001| Thu Jun 14 01:40:54
m30001| Thu Jun 14 01:40:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:40:54
m30001| Thu Jun 14 01:40:54 [initandlisten] MongoDB starting : pid=26660 port=30001 dbpath=/data/db/mrShard1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:40:54 [initandlisten]
m30001| Thu Jun 14 01:40:54 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:40:54 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:40:54 [initandlisten]
m30001| Thu Jun 14 01:40:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:40:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:40:54 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:40:54 [initandlisten]
m30001| Thu Jun 14 01:40:54 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:40:54 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:40:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:40:54 [initandlisten] options: { dbpath: "/data/db/mrShard1", port: 30001 }
m30001| Thu Jun 14 01:40:54 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:40:54 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:40:54 [initandlisten] connection accepted from 127.0.0.1:48665 #1 (1 connection now open)
ShardingTest mrShard :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001
    ]
}
Thu Jun 14 01:40:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:40:54 [initandlisten] connection accepted from 127.0.0.1:60091 #2 (2 connections now open)
m30999| Thu Jun 14 01:40:54 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:40:54 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26674 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:40:54 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:40:54 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:40:54 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:40:54 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:40:54 [mongosMain] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:40:54 [FileAllocator] allocating new datafile /data/db/mrShard0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:40:54 [FileAllocator] creating directory /data/db/mrShard0/_tmp
m30999| Thu Jun 14 01:40:54 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:40:54 [initandlisten] connection accepted from 127.0.0.1:60093 #3 (3 connections now open)
m30999| Thu Jun 14 01:40:54 [mongosMain] connected connection!
m30000| Thu Jun 14 01:40:54 [FileAllocator] done allocating datafile /data/db/mrShard0/config.ns, size: 16MB, took 0.226 secs
m30000| Thu Jun 14 01:40:54 [FileAllocator] allocating new datafile /data/db/mrShard0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:40:55 [FileAllocator] done allocating datafile /data/db/mrShard0/config.0, size: 16MB, took 0.239 secs
m30000| Thu Jun 14 01:40:55 [FileAllocator] allocating new datafile /data/db/mrShard0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:40:55 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn2] insert config.settings keyUpdates:0 locks(micros) w:484908 484ms
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:40:55 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:55 [CheckConfigServers] connected connection!
m30000| Thu Jun 14 01:40:55 [initandlisten] connection accepted from 127.0.0.1:60096 #4 (4 connections now open)
m30999| Thu Jun 14 01:40:55 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:55 [mongosMain] connected connection!
m30000| Thu Jun 14 01:40:55 [initandlisten] connection accepted from 127.0.0.1:60097 #5 (5 connections now open)
m30000| Thu Jun 14 01:40:55 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:55 [mongosMain] MaxChunkSize: 1
m30999| Thu Jun 14 01:40:55 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:40:55 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:40:55 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:40:55 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:40:55 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: PeriodicTask::Runner
m30000| Thu Jun 14 01:40:55 [conn4] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn4] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:40:55 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn4] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn4] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:40:55 [conn4] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:55 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:40:55 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:40:55
m30999| Thu Jun 14 01:40:55 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:40:55 [Balancer] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:40:55 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:55 [Balancer] connected connection!
m30000| Thu Jun 14 01:40:55 [initandlisten] connection accepted from 127.0.0.1:60098 #6 (6 connections now open)
m30999| Thu Jun 14 01:40:55 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:40:55 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652455:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:40:55 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:40:55 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:40:55 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97967607081b222f40290" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:40:55 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:40:55 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652455:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:40:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97967607081b222f40290
m30999| Thu Jun 14 01:40:55 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:40:55 [Balancer] no collections to balance
m30999| Thu Jun 14 01:40:55 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:40:55 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:40:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30000| Thu Jun 14 01:40:55 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:40:55 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:40:55 [mongosMain] connection accepted from 127.0.0.1:54161 #1 (1 connection now open)
m30999| Thu Jun 14 01:40:55 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:40:55 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:55 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:40:55 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:40:55 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:55 [conn] connected connection!
m30001| Thu Jun 14 01:40:55 [initandlisten] connection accepted from 127.0.0.1:48675 #2 (2 connections now open)
m30999| Thu Jun 14 01:40:55 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:40:55 [conn] couldn't find database [mrShard] in config db
m30999| Thu Jun 14 01:40:55 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:40:55 [conn] put [mrShard] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:40:55 [conn] enabling sharding on: mrShard
m30999| Thu Jun 14 01:40:55 [conn] CMD: shardcollection: { shardcollection: "mrShard.srcSharded", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:40:55 [conn] enable sharding on: mrShard.srcSharded with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:40:55 [conn] going to create 1 chunk(s) for: mrShard.srcSharded using new epoch 4fd97967607081b222f40291
m30001| Thu Jun 14 01:40:55 [FileAllocator] allocating new datafile /data/db/mrShard1/mrShard.ns, filling with zeroes...
m30001| Thu Jun 14 01:40:55 [FileAllocator] creating directory /data/db/mrShard1/_tmp
m30999| Thu Jun 14 01:40:55 [conn] ChunkManager: time to load chunks for mrShard.srcSharded: 0ms sequenceNumber: 2 version: 1|0||4fd97967607081b222f40291 based on: (empty)
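mrShard.srcSharded has just been sharded on _id with a single initial chunk placed on shard0001. The corresponding admin commands issued through mongos look like the following sketch (the test's own setup helpers are not visible in this log):

    // Sketch: enable sharding on the database, then shard the source collection on _id.
    db.getSiblingDB("admin").runCommand({ enableSharding: "mrShard" });
    db.getSiblingDB("admin").runCommand({ shardCollection: "mrShard.srcSharded", key: { _id: 1 } });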
m30000| Thu Jun 14 01:40:55 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:40:55 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:40:55 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:55 [conn] connected connection!
m30999| Thu Jun 14 01:40:55 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97967607081b222f4028f
m30999| Thu Jun 14 01:40:55 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: WriteBackListener-localhost:30000
m30000| Thu Jun 14 01:40:55 [initandlisten] connection accepted from 127.0.0.1:60101 #7 (7 connections now open)
m30999| Thu Jun 14 01:40:55 [conn] resetting shard version of mrShard.srcSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:40:55 [conn] setShardVersion shard0000 localhost:30000 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:40:55 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:40:55 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:40:55 [conn] connected connection!
m30999| Thu Jun 14 01:40:55 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97967607081b222f4028f
m30999| Thu Jun 14 01:40:55 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:40:55 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:40:55 [initandlisten] connection accepted from 127.0.0.1:48677 #3 (3 connections now open)
m30000| Thu Jun 14 01:40:55 [FileAllocator] done allocating datafile /data/db/mrShard0/config.1, size: 32MB, took 0.705 secs
m30001| Thu Jun 14 01:40:56 [FileAllocator] done allocating datafile /data/db/mrShard1/mrShard.ns, size: 16MB, took 0.342 secs
m30001| Thu Jun 14 01:40:56 [FileAllocator] allocating new datafile /data/db/mrShard1/mrShard.0, filling with zeroes...
m30001| Thu Jun 14 01:40:56 [FileAllocator] done allocating datafile /data/db/mrShard1/mrShard.0, size: 16MB, took 0.381 secs
m30001| Thu Jun 14 01:40:56 [conn2] build index mrShard.srcSharded { _id: 1 }
m30001| Thu Jun 14 01:40:56 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:56 [conn2] info: creating collection mrShard.srcSharded on add index
m30001| Thu Jun 14 01:40:56 [conn2] insert mrShard.system.indexes keyUpdates:0 locks(micros) R:8 W:72 r:302 w:1298060 1298ms
m30001| Thu Jun 14 01:40:56 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:72 reslen:51 1295ms
m30001| Thu Jun 14 01:40:56 [FileAllocator] allocating new datafile /data/db/mrShard1/mrShard.1, filling with zeroes...
m30001| Thu Jun 14 01:40:56 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:40:56 [conn] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:40:56 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.srcSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.srcSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:40:56 [conn] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30000| Thu Jun 14 01:40:56 [initandlisten] connection accepted from 127.0.0.1:60103 #8 (8 connections now open)
m30999| Thu Jun 14 01:40:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:40:56 [conn3] build index mrShard.srcNonSharded { _id: 1 }
m30001| Thu Jun 14 01:40:56 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:40:57 [FileAllocator] done allocating datafile /data/db/mrShard1/mrShard.1, size: 32MB, took 0.667 secs
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 58312 splitThreshold: 921
m30999| Thu Jun 14 01:41:01 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:41:01 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:41:01 [conn] connected connection!
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:41:01 [initandlisten] connection accepted from 127.0.0.1:48679 #4 (4 connections now open)
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 220 splitThreshold: 921
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 220 splitThreshold: 921
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 220 splitThreshold: 921
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 220 splitThreshold: 921
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 220 splitThreshold: 921
m30001| Thu Jun 14 01:41:01 [conn4] request split points lookup for chunk mrShard.srcSharded { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:01 [conn4] max number of requested split points reached (2) before the end of chunk mrShard.srcSharded { : MinKey } -->> { : MaxKey }
m30000| Thu Jun 14 01:41:01 [initandlisten] connection accepted from 127.0.0.1:60105 #9 (9 connections now open)
m30001| Thu Jun 14 01:41:01 [conn4] received splitChunk request: { splitChunk: "mrShard.srcSharded", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd9796d56cc70fc67ed6799') } ], shardId: "mrShard.srcSharded-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:01 [conn4] created new distributed lock for mrShard.srcSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:01 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652461:305425140 (sleeping for 30000ms)
m30001| Thu Jun 14 01:41:01 [conn4] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' acquired, ts : 4fd9796d78d0a44169ffafc5
m30001| Thu Jun 14 01:41:01 [conn4] splitChunk accepted at version 1|0||4fd97967607081b222f40291
m30001| Thu Jun 14 01:41:01 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:01-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48679", time: new Date(1339652461238), what: "split", ns: "mrShard.srcSharded", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97967607081b222f40291') }, right: { min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97967607081b222f40291') } } }
m30001| Thu Jun 14 01:41:01 [conn4] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' unlocked.
m30999| Thu Jun 14 01:41:01 [conn] ChunkManager: time to load chunks for mrShard.srcSharded: 0ms sequenceNumber: 3 version: 1|2||4fd97967607081b222f40291 based on: 1|0||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:01 [conn] autosplitted mrShard.srcSharded shard: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } (splitThreshold 921)
m30999| Thu Jun 14 01:41:01 [conn] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:01 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97967607081b222f40291'), ok: 1.0 }
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } dataWritten: 142346 splitThreshold: 471859
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } dataWritten: 94380 splitThreshold: 471859
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:01 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } dataWritten: 94380 splitThreshold: 471859
m30999| Thu Jun 14 01:41:01 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:02 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } dataWritten: 94380 splitThreshold: 471859
m30999| Thu Jun 14 01:41:02 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:02 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } dataWritten: 94380 splitThreshold: 471859
m30999| Thu Jun 14 01:41:02 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:02 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } dataWritten: 94380 splitThreshold: 471859
m30999| Thu Jun 14 01:41:02 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:03 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } dataWritten: 94380 splitThreshold: 471859
m30001| Thu Jun 14 01:41:03 [conn4] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9796d56cc70fc67ed6799') } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:03 [conn4] max number of requested split points reached (2) before the end of chunk mrShard.srcSharded { : ObjectId('4fd9796d56cc70fc67ed6799') } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:03 [conn4] received splitChunk request: { splitChunk: "mrShard.srcSharded", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } ], shardId: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:03 [conn4] created new distributed lock for mrShard.srcSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:03 [conn4] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' acquired, ts : 4fd9796f78d0a44169ffafc6
m30001| Thu Jun 14 01:41:03 [conn4] splitChunk accepted at version 1|2||4fd97967607081b222f40291
m30001| Thu Jun 14 01:41:03 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:03-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48679", time: new Date(1339652463052), what: "split", ns: "mrShard.srcSharded", details: { before: { min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97967607081b222f40291') }, right: { min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97967607081b222f40291') } } }
m30001| Thu Jun 14 01:41:03 [conn4] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' unlocked.
m30999| Thu Jun 14 01:41:03 [conn] ChunkManager: time to load chunks for mrShard.srcSharded: 0ms sequenceNumber: 4 version: 1|4||4fd97967607081b222f40291 based on: 1|2||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:03 [conn] autosplitted mrShard.srcSharded shard: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: MaxKey } on: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:41:03 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:41:03 [conn] recently split chunk: { min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:41:03 [conn] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:03 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97967607081b222f40291'), ok: 1.0 }
m30999| Thu Jun 14 01:41:03 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } dataWritten: 188778 splitThreshold: 943718
m30999| Thu Jun 14 01:41:03 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:03 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } dataWritten: 188760 splitThreshold: 943718
m30999| Thu Jun 14 01:41:03 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:04 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } dataWritten: 188760 splitThreshold: 943718
m30999| Thu Jun 14 01:41:04 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:41:04 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } dataWritten: 188760 splitThreshold: 943718
m30001| Thu Jun 14 01:41:04 [conn4] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9796f56cc70fc67ed99f9') } -->> { : MaxKey }
m30999| Thu Jun 14 01:41:04 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd9797056cc70fc67edc884') }
m30999| Thu Jun 14 01:41:05 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:41:05 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:41:05 [Balancer] connected connection!
m30999| Thu Jun 14 01:41:05 [Balancer] Refreshing MaxChunkSize: 1
m30000| Thu Jun 14 01:41:05 [initandlisten] connection accepted from 127.0.0.1:60106 #10 (10 connections now open)
m30999| Thu Jun 14 01:41:05 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:05 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97971607081b222f40292" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97967607081b222f40290" } }
m30999| Thu Jun 14 01:41:05 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97971607081b222f40292
m30999| Thu Jun 14 01:41:05 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:41:05 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:05 [Balancer] shard0000 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:05 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:05 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:05 [Balancer] shard0000
m30999| Thu Jun 14 01:41:05 [Balancer] shard0001
m30999| Thu Jun 14 01:41:05 [Balancer] { _id: "mrShard.srcSharded-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:05 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:05 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796f56cc70fc67ed99f9')", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:05 [Balancer] ----
m30999| Thu Jun 14 01:41:05 [Balancer] collection : mrShard.srcSharded
m30999| Thu Jun 14 01:41:05 [Balancer] donor : 3 chunks on shard0001
m30999| Thu Jun 14 01:41:05 [Balancer] receiver : 0 chunks on shard0000
m30999| Thu Jun 14 01:41:05 [Balancer] chose [shard0001] to [shard0000] { _id: "mrShard.srcSharded-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:05 [Balancer] moving chunk ns: mrShard.srcSharded moving ( ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { _id: MinKey } max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:41:05 [conn4] received moveChunk request: { moveChunk: "mrShard.srcSharded", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, maxChunkSizeBytes: 1048576, shardId: "mrShard.srcSharded-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:05 [conn4] created new distributed lock for mrShard.srcSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:05 [conn4] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' acquired, ts : 4fd9797178d0a44169ffafc7
m30001| Thu Jun 14 01:41:05 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:05-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48679", time: new Date(1339652465088), what: "moveChunk.start", ns: "mrShard.srcSharded", details: { min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:05 [conn4] moveChunk request accepted at version 1|4||4fd97967607081b222f40291
m30001| Thu Jun 14 01:41:05 [conn4] moveChunk number of documents: 0
m30001| Thu Jun 14 01:41:05 [initandlisten] connection accepted from 127.0.0.1:48682 #5 (5 connections now open)
m30000| Thu Jun 14 01:41:05 [FileAllocator] allocating new datafile /data/db/mrShard0/mrShard.ns, filling with zeroes...
m30000| Thu Jun 14 01:41:05 [FileAllocator] done allocating datafile /data/db/mrShard0/mrShard.ns, size: 16MB, took 0.247 secs
m30000| Thu Jun 14 01:41:05 [FileAllocator] allocating new datafile /data/db/mrShard0/mrShard.0, filling with zeroes...
m30999| Thu Jun 14 01:41:05 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } dataWritten: 188760 splitThreshold: 943718
m30999| Thu Jun 14 01:41:05 [conn] creating new connection to:localhost:30001
m30001| Thu Jun 14 01:41:05 [initandlisten] connection accepted from 127.0.0.1:48683 #6 (6 connections now open)
m30001| Thu Jun 14 01:41:05 [conn6] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9796f56cc70fc67ed99f9') } -->> { : MaxKey }
m30999| Thu Jun 14 01:41:05 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:41:05 [conn] connected connection!
m30999| Thu Jun 14 01:41:05 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd9797056cc70fc67edc884') }
m30000| Thu Jun 14 01:41:05 [FileAllocator] done allocating datafile /data/db/mrShard0/mrShard.0, size: 16MB, took 0.294 secs
m30000| Thu Jun 14 01:41:05 [FileAllocator] allocating new datafile /data/db/mrShard0/mrShard.1, filling with zeroes...
m30000| Thu Jun 14 01:41:05 [migrateThread] build index mrShard.srcSharded { _id: 1 }
m30000| Thu Jun 14 01:41:05 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:05 [migrateThread] info: creating collection mrShard.srcSharded on add index
m30000| Thu Jun 14 01:41:05 [migrateThread] migrate commit succeeded flushing to secondaries for 'mrShard.srcSharded' { _id: MinKey } -> { _id: ObjectId('4fd9796d56cc70fc67ed6799') }
m30001| Thu Jun 14 01:41:06 [conn4] moveChunk data transfer progress: { active: true, ns: "mrShard.srcSharded", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:41:06 [conn4] moveChunk setting version to: 2|0||4fd97967607081b222f40291
m30001| Thu Jun 14 01:41:06 [initandlisten] connection accepted from 127.0.0.1:48686 #7 (7 connections now open)
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000000'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000000 needVersion : 2|0||4fd97967607081b222f40291 mine : 1|4||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1ae'), j: 69.0, i: 21.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:41:06 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connected connection!
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] resetting shard version of mrShard.srcSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] setShardVersion shard0000 localhost:30000 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9974370
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:41:06 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connected connection!
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9973890
m30000| Thu Jun 14 01:41:06 [initandlisten] connection accepted from 127.0.0.1:60109 #11 (11 connections now open)
m30000| Thu Jun 14 01:41:06 [migrateThread] migrate commit succeeded flushing to secondaries for 'mrShard.srcSharded' { _id: MinKey } -> { _id: ObjectId('4fd9796d56cc70fc67ed6799') }
m30000| Thu Jun 14 01:41:06 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:06-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652466095), what: "moveChunk.to", ns: "mrShard.srcSharded", details: { min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, step1 of 5: 552, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 454 } }
m30000| Thu Jun 14 01:41:06 [initandlisten] connection accepted from 127.0.0.1:60110 #12 (12 connections now open)
m30999| Thu Jun 14 01:41:06 [Balancer] moveChunk result: { ok: 1.0 }
m30001| Thu Jun 14 01:41:06 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "mrShard.srcSharded", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:41:06 [conn4] moveChunk updating self version to: 2|1||4fd97967607081b222f40291 through { _id: ObjectId('4fd9796d56cc70fc67ed6799') } -> { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } for collection 'mrShard.srcSharded'
m30001| Thu Jun 14 01:41:06 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:06-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48679", time: new Date(1339652466104), what: "moveChunk.commit", ns: "mrShard.srcSharded", details: { min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:06 [conn4] doing delete inline
m30001| Thu Jun 14 01:41:06 [conn4] moveChunk deleted: 0
m30001| Thu Jun 14 01:41:06 [conn4] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' unlocked.
m30001| Thu Jun 14 01:41:06 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:06-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48679", time: new Date(1339652466105), what: "moveChunk.from", ns: "mrShard.srcSharded", details: { min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1005, step5 of 6: 10, step6 of 6: 0 } }
m30001| Thu Jun 14 01:41:06 [conn4] command admin.$cmd command: { moveChunk: "mrShard.srcSharded", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, maxChunkSizeBytes: 1048576, shardId: "mrShard.srcSharded-_id_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:40160 w:44 reslen:37 1018ms
m30999| Thu Jun 14 01:41:06 [Balancer] ChunkManager: time to load chunks for mrShard.srcSharded: 0ms sequenceNumber: 5 version: 2|1||4fd97967607081b222f40291 based on: 1|4||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:41:06 [conn] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:06 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30999| Thu Jun 14 01:41:06 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97967607081b222f40291'), ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.srcSharded", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd97967607081b222f40291'), globalVersion: Timestamp 2000|0, globalVersionEpoch: ObjectId('4fd97967607081b222f40291'), reloadConfig: true, errmsg: "shard global version for collection is higher than trying to set to 'mrShard.srcSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9973890
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } dataWritten: 188775 splitThreshold: 943718
m30001| Thu Jun 14 01:41:06 [conn4] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9796f56cc70fc67ed99f9') } -->> { : MaxKey }
m30999| Thu Jun 14 01:41:06 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd9797056cc70fc67edc884') }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] insert will be retried b/c sharding config info is stale, retries: 0 ns: mrShard.srcSharded data: { _id: ObjectId('4fd9797256cc70fc67edf1ae'), j: 69.0, i: 21.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000001'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000001 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1af'), j: 69.0, i: 22.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000002'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000002 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b0'), j: 69.0, i: 23.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000003'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000003 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b1'), j: 69.0, i: 24.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000004'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000004 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b2'), j: 69.0, i: 25.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000005'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000005 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b3'), j: 69.0, i: 26.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000006'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000006 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b4'), j: 69.0, i: 27.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000007'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000007 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b5'), j: 69.0, i: 28.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000008'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000008 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b6'), j: 69.0, i: 29.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000009'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000009 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b7'), j: 69.0, i: 30.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000000a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000000a needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b8'), j: 69.0, i: 31.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000000b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000000b needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1b9'), j: 69.0, i: 32.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000000c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000000c needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1ba'), j: 69.0, i: 33.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000000d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000000d needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1bb'), j: 69.0, i: 34.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000000e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000000e needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1bc'), j: 69.0, i: 35.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000000f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000000f needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1bd'), j: 69.0, i: 36.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000010'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000010 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1be'), j: 69.0, i: 37.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000011'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000011 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1bf'), j: 69.0, i: 38.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000012'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000012 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c0'), j: 69.0, i: 39.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000013'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000013 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c1'), j: 69.0, i: 40.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000014'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000014 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c2'), j: 69.0, i: 41.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000015'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000015 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c3'), j: 69.0, i: 42.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000016'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000016 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c4'), j: 69.0, i: 43.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000017'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000017 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c5'), j: 69.0, i: 44.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000018'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000018 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c6'), j: 69.0, i: 45.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000019'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000019 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c7'), j: 69.0, i: 46.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000001a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000001a needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c8'), j: 69.0, i: 47.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000001b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000001b needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1c9'), j: 69.0, i: 48.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000001c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000001c needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1ca'), j: 69.0, i: 49.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000001d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000001d needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1cb'), j: 69.0, i: 50.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000001e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000001e needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1cc'), j: 69.0, i: 51.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000001f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000001f needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1cd'), j: 69.0, i: 52.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000020'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000020 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1ce'), j: 69.0, i: 53.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000021'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000021 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1cf'), j: 69.0, i: 54.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000022'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000022 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d0'), j: 69.0, i: 55.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000023'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000023 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d1'), j: 69.0, i: 56.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000024'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000024 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d2'), j: 69.0, i: 57.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000025'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000025 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d3'), j: 69.0, i: 58.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000026'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000026 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d4'), j: 69.0, i: 59.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000027'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000027 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d5'), j: 69.0, i: 60.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000028'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000028 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d6'), j: 69.0, i: 61.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000029'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000029 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d7'), j: 69.0, i: 62.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000002a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000002a needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d8'), j: 69.0, i: 63.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000002b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000002b needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1d9'), j: 69.0, i: 64.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000002c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000002c needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1da'), j: 69.0, i: 65.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000002d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000002d needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1db'), j: 69.0, i: 66.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000002e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000002e needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1dc'), j: 69.0, i: 67.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000002f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000002f needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1dd'), j: 69.0, i: 68.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000030'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000030 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1de'), j: 69.0, i: 69.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000031'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000031 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1df'), j: 69.0, i: 70.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000032'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000032 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e0'), j: 69.0, i: 71.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000033'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000033 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e1'), j: 69.0, i: 72.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000034'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000034 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e2'), j: 69.0, i: 73.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000035'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000035 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e3'), j: 69.0, i: 74.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000036'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000036 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e4'), j: 69.0, i: 75.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000037'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000037 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e5'), j: 69.0, i: 76.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000038'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000038 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e6'), j: 69.0, i: 77.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd979720000000000000039'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd979720000000000000039 needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e7'), j: 69.0, i: 78.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000003a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000003a needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e8'), j: 69.0, i: 79.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000003b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000003b needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1e9'), j: 69.0, i: 80.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000003c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000003c needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1ea'), j: 69.0, i: 81.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000003d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000003d needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1eb'), j: 69.0, i: 82.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "mrShard.srcSharded", id: ObjectId('4fd97972000000000000003e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97967607081b222f40291'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97972000000000000003e needVersion : 2|0||4fd97967607081b222f40291 mine : 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] op: insert len: 83 ns: mrShard.srcSharded{ _id: ObjectId('4fd9797256cc70fc67edf1ec'), j: 69.0, i: 83.0 }
m30999| Thu Jun 14 01:41:06 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97967607081b222f40291, at version 2|1||4fd97967607081b222f40291
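The writeback entries above show mongos replaying inserts that reached shard0001 with a stale shard version (yourVersion 1|0 against needVersion 2|0) after the preceding split. A minimal sketch of the kind of insert workload that would produce the { j, i } documents seen here, assuming a mongo shell connected to the mongos on localhost:30999; the loop bounds are illustrative, not the actual test script:

    // Hedged sketch, not the actual jstest: inserts routed through mongos into the
    // sharded collection. Writes that arrive at the shard with a stale version are
    // queued and retried via the WriteBackListener shown above.
    var coll = db.getSiblingDB("mrShard").srcSharded;
    for (var j = 0; j < 100; j++) {        // outer counter, logged as "j"
        for (var i = 0; i < 512; i++) {    // inner counter, logged as "i"
            coll.insert({ j: j, i: i });
        }
    }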
m30000| Thu Jun 14 01:41:06 [FileAllocator] done allocating datafile /data/db/mrShard0/mrShard.1, size: 32MB, took 0.671 secs
m30999| Thu Jun 14 01:41:06 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } dataWritten: 188760 splitThreshold: 943718
m30001| Thu Jun 14 01:41:06 [conn2] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9796f56cc70fc67ed99f9') } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:06 [conn2] max number of requested split points reached (2) before the end of chunk mrShard.srcSharded { : ObjectId('4fd9796f56cc70fc67ed99f9') } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:06 [conn2] received splitChunk request: { splitChunk: "mrShard.srcSharded", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd9797256cc70fc67ee03ae') } ], shardId: "mrShard.srcSharded-_id_ObjectId('4fd9796f56cc70fc67ed99f9')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:06 [conn2] created new distributed lock for mrShard.srcSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:06 [conn2] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' acquired, ts : 4fd9797278d0a44169ffafc8
m30001| Thu Jun 14 01:41:06 [conn2] splitChunk accepted at version 2|1||4fd97967607081b222f40291
m30001| Thu Jun 14 01:41:06 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:06-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48675", time: new Date(1339652466873), what: "split", ns: "mrShard.srcSharded", details: { before: { min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97967607081b222f40291') }, right: { min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, max: { _id: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97967607081b222f40291') } } }
m30001| Thu Jun 14 01:41:06 [conn2] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' unlocked.
m30999| Thu Jun 14 01:41:06 [conn] ChunkManager: time to load chunks for mrShard.srcSharded: 0ms sequenceNumber: 6 version: 2|3||4fd97967607081b222f40291 based on: 2|1||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:06 [conn] autosplitted mrShard.srcSharded shard: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } max: { _id: MaxKey } on: { _id: ObjectId('4fd9797256cc70fc67ee03ae') } (splitThreshold 943718) (migrate suggested)
m30999| Thu Jun 14 01:41:06 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:41:06 [conn] recently split chunk: { min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:41:06 [conn] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 2000|3, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:06 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97967607081b222f40291'), ok: 1.0 }
m30999| Thu Jun 14 01:41:06 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') } max: { _id: MaxKey } dataWritten: 188771 splitThreshold: 943718
m30999| Thu Jun 14 01:41:06 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:41:06 [conn2] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9797256cc70fc67ee03ae') } -->> { : MaxKey }
m30999| Thu Jun 14 01:41:07 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') } max: { _id: MaxKey } dataWritten: 188760 splitThreshold: 943718
m30001| Thu Jun 14 01:41:07 [conn2] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9797256cc70fc67ee03ae') } -->> { : MaxKey }
m30999| Thu Jun 14 01:41:07 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:41:07 [FileAllocator] allocating new datafile /data/db/mrShard1/mrShard.2, filling with zeroes...
m30001| Thu Jun 14 01:41:07 [conn2] request split points lookup for chunk mrShard.srcSharded { : ObjectId('4fd9797256cc70fc67ee03ae') } -->> { : MaxKey }
m30999| Thu Jun 14 01:41:07 [conn] about to initiate autosplit: ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') } max: { _id: MaxKey } dataWritten: 188760 splitThreshold: 943718
m30999| Thu Jun 14 01:41:07 [conn] chunk not full enough to trigger auto-split no split entry
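Each "about to initiate autosplit" attempt above checks whether the top chunk has grown past splitThreshold (943718 bytes here); when it has not, mongos logs "chunk not full enough to trigger auto-split". A hedged sketch of how the resulting chunk layout could be inspected from a shell connected to this mongos; config.chunks is where the chunk metadata referenced in these messages is stored:

    // Hedged sketch: list the chunks of mrShard.srcSharded with their shard and version.
    db.getSiblingDB("config").chunks
      .find({ ns: "mrShard.srcSharded" }, { min: 1, max: 1, shard: 1, lastmod: 1 })
      .sort({ min: 1 })
      .forEach(printjson);
    sh.status();   // or print the whole cluster summary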
m30999| Thu Jun 14 01:41:08 [conn] simple MR, just passthrough
m30001| Thu Jun 14 01:41:08 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_0_inc
m30001| Thu Jun 14 01:41:08 [conn3] build index mrShard.tmp.mr.srcNonSharded_0_inc { 0: 1 }
m30001| Thu Jun 14 01:41:08 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:08 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_0
m30001| Thu Jun 14 01:41:08 [conn3] build index mrShard.tmp.mr.srcNonSharded_0 { _id: 1 }
m30001| Thu Jun 14 01:41:08 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:08 [FileAllocator] done allocating datafile /data/db/mrShard1/mrShard.2, size: 64MB, took 1.282 secs
m30001| Thu Jun 14 01:41:09 [conn3] CMD: drop mrShard.mrBasic
m30001| Thu Jun 14 01:41:09 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_0
m30001| Thu Jun 14 01:41:09 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_0
m30001| Thu Jun 14 01:41:09 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_0_inc
m30001| Thu Jun 14 01:41:09 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "mrBasic" } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:2188 r:1129059 w:2473207 reslen:131 1343ms
{
"result" : "mrBasic",
"timeMillis" : 1343,
"counts" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
},
"ok" : 1,
}
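The "simple MR, just passthrough" line means mongos forwarded this mapReduce on the unsharded srcNonSharded collection straight to its primary shard (shard0001), which is why all of the work appears on m30001. A hedged sketch of the equivalent shell invocation; the map and reduce bodies are copied from the command logged above, the wrapper is the standard db.collection.mapReduce() helper:

    // Hedged sketch of the call behind the command logged above.
    var res = db.getSiblingDB("mrShard").srcNonSharded.mapReduce(
        function map() { emit(this.i, 1); },
        function reduce(key, values) { return Array.sum(values); },
        { out: "mrBasic" }   // plain string: results replace the mrShard.mrBasic collection
    );
    printjson(res);          // prints the { result, timeMillis, counts, ok } document above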
m30999| Thu Jun 14 01:41:09 [conn] simple MR, just passthrough
m30001| Thu Jun 14 01:41:09 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_1_inc
m30001| Thu Jun 14 01:41:09 [conn3] build index mrShard.tmp.mr.srcNonSharded_1_inc { 0: 1 }
m30001| Thu Jun 14 01:41:09 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:09 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_1
m30001| Thu Jun 14 01:41:09 [conn3] build index mrShard.tmp.mr.srcNonSharded_1 { _id: 1 }
m30001| Thu Jun 14 01:41:09 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:10 [conn3] CMD: drop mrShard.mrReplace
m30001| Thu Jun 14 01:41:10 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_1
m30001| Thu Jun 14 01:41:10 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_1
m30001| Thu Jun 14 01:41:10 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_1_inc
m30001| Thu Jun 14 01:41:10 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: { replace: "mrReplace" } } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:4019 r:2244993 w:2487776 reslen:133 1303ms
{
"result" : "mrReplace",
"timeMillis" : 1303,
"counts" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
},
"ok" : 1,
}
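The runs that follow differ only in the out option: replace overwrites the target collection, merge upserts each key into it, and reduce re-reduces new values against any existing documents. A hedged sketch of those variants, reusing the same map/reduce pair; the collection names match this log, everything else is the standard helper syntax:

    // Hedged sketch of the remaining collection-output modes exercised below.
    var mrDB   = db.getSiblingDB("mrShard");
    var map    = function () { emit(this.i, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    mrDB.srcNonSharded.mapReduce(map, reduce, { out: { replace: "mrReplace" } }); // overwrite target
    mrDB.srcNonSharded.mapReduce(map, reduce, { out: { merge:   "mrMerge"   } }); // upsert per key
    mrDB.srcNonSharded.mapReduce(map, reduce, { out: { reduce:  "mrReduce"  } }); // combine with existing docs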
m30999| Thu Jun 14 01:41:10 [conn] simple MR, just passthrough
m30001| Thu Jun 14 01:41:10 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_2_inc
m30001| Thu Jun 14 01:41:10 [conn3] build index mrShard.tmp.mr.srcNonSharded_2_inc { 0: 1 }
m30001| Thu Jun 14 01:41:10 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:10 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_2
m30001| Thu Jun 14 01:41:10 [conn3] build index mrShard.tmp.mr.srcNonSharded_2 { _id: 1 }
m30001| Thu Jun 14 01:41:10 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:41:11 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:41:11 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:11 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97977607081b222f40293" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97971607081b222f40292" } }
m30999| Thu Jun 14 01:41:11 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97977607081b222f40293
m30999| Thu Jun 14 01:41:11 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:41:11 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:11 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:11 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:11 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:11 [Balancer] shard0000
m30999| Thu Jun 14 01:41:11 [Balancer] { _id: "mrShard.srcSharded-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:11 [Balancer] shard0001
m30999| Thu Jun 14 01:41:11 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:11 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796f56cc70fc67ed99f9')", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:11 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9797256cc70fc67ee03ae')", lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, max: { _id: MaxKey }, shard: "shard0001" }
m30001| Thu Jun 14 01:41:11 [conn2] received moveChunk request: { moveChunk: "mrShard.srcSharded", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, maxChunkSizeBytes: 1048576, shardId: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:11 [conn2] created new distributed lock for mrShard.srcSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:11 [Balancer] ----
m30999| Thu Jun 14 01:41:11 [Balancer] collection : mrShard.srcSharded
m30999| Thu Jun 14 01:41:11 [Balancer] donor : 3 chunks on shard0001
m30999| Thu Jun 14 01:41:11 [Balancer] receiver : 1 chunks on shard0000
m30999| Thu Jun 14 01:41:11 [Balancer] chose [shard0001] to [shard0000] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:11 [Balancer] moving chunk ns: mrShard.srcSharded moving ( ns:mrShard.srcSharded at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') } max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:41:11 [conn2] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' acquired, ts : 4fd9797778d0a44169ffafc9
m30001| Thu Jun 14 01:41:11 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:11-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48675", time: new Date(1339652471112), what: "moveChunk.start", ns: "mrShard.srcSharded", details: { min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:11 [conn2] moveChunk request accepted at version 2|3||4fd97967607081b222f40291
m30001| Thu Jun 14 01:41:11 [conn2] moveChunk number of documents: 12896
m30000| Thu Jun 14 01:41:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'mrShard.srcSharded' { _id: ObjectId('4fd9796d56cc70fc67ed6799') } -> { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }
m30001| Thu Jun 14 01:41:12 [conn2] moveChunk data transfer progress: { active: true, ns: "mrShard.srcSharded", from: "localhost:30001", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 12896, clonedBytes: 567424, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:41:12 [conn2] moveChunk setting version to: 3|0||4fd97967607081b222f40291
m30000| Thu Jun 14 01:41:12 [migrateThread] migrate commit succeeded flushing to secondaries for 'mrShard.srcSharded' { _id: ObjectId('4fd9796d56cc70fc67ed6799') } -> { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }
m30000| Thu Jun 14 01:41:12 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:12-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652472140), what: "moveChunk.to", ns: "mrShard.srcSharded", details: { min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 527, step4 of 5: 0, step5 of 5: 479 } }
m30001| Thu Jun 14 01:41:12 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "mrShard.srcSharded", from: "localhost:30001", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 12896, clonedBytes: 567424, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:41:12 [conn2] moveChunk updating self version to: 3|1||4fd97967607081b222f40291 through { _id: ObjectId('4fd9796f56cc70fc67ed99f9') } -> { _id: ObjectId('4fd9797256cc70fc67ee03ae') } for collection 'mrShard.srcSharded'
m30001| Thu Jun 14 01:41:12 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:12-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48675", time: new Date(1339652472149), what: "moveChunk.commit", ns: "mrShard.srcSharded", details: { min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:12 [conn2] doing delete inline
m30001| Thu Jun 14 01:41:12 [conn3] CMD: drop mrShard.mrMerge
m30001| Thu Jun 14 01:41:12 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_2
m30001| Thu Jun 14 01:41:12 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_2
m30001| Thu Jun 14 01:41:12 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_2_inc
m30001| Thu Jun 14 01:41:12 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: { merge: "mrMerge" } } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:5952 r:3457493 w:2505046 reslen:131 1462ms
{
"result" : "mrMerge",
"timeMillis" : 1461,
"counts" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
},
"ok" : 1,
}
m30999| Thu Jun 14 01:41:12 [conn] simple MR, just passthrough
m30001| Thu Jun 14 01:41:12 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_3_inc
m30001| Thu Jun 14 01:41:12 [conn3] build index mrShard.tmp.mr.srcNonSharded_3_inc { 0: 1 }
m30001| Thu Jun 14 01:41:12 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:12 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_3
m30001| Thu Jun 14 01:41:12 [conn3] build index mrShard.tmp.mr.srcNonSharded_3 { _id: 1 }
m30001| Thu Jun 14 01:41:12 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:13 [conn3] CMD: drop mrShard.mrReduce
m30001| Thu Jun 14 01:41:13 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_3
m30001| Thu Jun 14 01:41:13 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_3
m30001| Thu Jun 14 01:41:13 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_3_inc
m30001| Thu Jun 14 01:41:13 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: { reduce: "mrReduce" } } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:8022 r:4517574 w:2523348 reslen:132 1309ms
{
"result" : "mrReduce",
"timeMillis" : 1308,
"counts" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
},
"ok" : 1,
}
m30999| Thu Jun 14 01:41:13 [conn] simple MR, just passthrough
m30001| Thu Jun 14 01:41:14 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: { inline: "mrInline" } } ntoreturn:1 keyUpdates:0 numYields: 512 locks(micros) W:8022 r:5624734 w:2523348 reslen:19471 1296ms
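Unlike the collection-output modes, the out: { inline: ... } run above returns the reduced documents directly in the command reply instead of writing them to a collection; the large "results" array printed after the balancer lines below is that reply (512 keys, each with value 100, matching the earlier counts). A hedged sketch of the call; { inline: 1 } is the usual spelling, and passing a name, as this run did, also works:

    // Hedged sketch: inline output, reduced documents come back in the reply.
    var res = db.getSiblingDB("mrShard").srcNonSharded.mapReduce(
        function () { emit(this.i, 1); },
        function (key, values) { return Array.sum(values); },
        { out: { inline: "mrInline" } }
    );
    res.results.length;   // 512, one entry per distinct i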
m30001| Thu Jun 14 01:41:14 [conn2] moveChunk deleted: 12896
m30001| Thu Jun 14 01:41:14 [conn2] distributed lock 'mrShard.srcSharded/domU-12-31-39-01-70-B4:30001:1339652461:305425140' unlocked.
m30001| Thu Jun 14 01:41:14 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:14-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48675", time: new Date(1339652474974), what: "moveChunk.from", ns: "mrShard.srcSharded", details: { min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 20, step4 of 6: 1006, step5 of 6: 10, step6 of 6: 2823 } }
m30001| Thu Jun 14 01:41:14 [conn2] command admin.$cmd command: { moveChunk: "mrShard.srcSharded", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, maxChunkSizeBytes: 1048576, shardId: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 W:72 r:76789 w:1724083 reslen:37 3863ms
m30999| Thu Jun 14 01:41:14 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:41:14 [Balancer] ChunkManager: time to load chunks for mrShard.srcSharded: 0ms sequenceNumber: 7 version: 3|1||4fd97967607081b222f40291 based on: 2|3||4fd97967607081b222f40291
m30999| Thu Jun 14 01:41:14 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:41:14 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
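The balancer round above chose the 2|1 chunk on shard0001 and migrated it to shard0000 to even out the 3-to-1 chunk distribution, then released the distributed lock. A hedged sketch of requesting the same kind of migration by hand through mongos, using the moveChunk admin command visible in the shard logs; the _id value is illustrative (any key inside the chunk to be moved works):

    // Hedged sketch: manual chunk migration via mongos instead of a balancer round.
    db.adminCommand({
        moveChunk: "mrShard.srcSharded",
        find: { _id: ObjectId("4fd9796d56cc70fc67ed6799") },  // a key inside the source chunk
        to: "shard0000"
    });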
{
"results" : [
{
"_id" : 0,
"value" : 100
},
{
"_id" : 1,
"value" : 100
},
{
"_id" : 2,
"value" : 100
},
{
"_id" : 3,
"value" : 100
},
{
"_id" : 4,
"value" : 100
},
{
"_id" : 5,
"value" : 100
},
{
"_id" : 6,
"value" : 100
},
{
"_id" : 7,
"value" : 100
},
{
"_id" : 8,
"value" : 100
},
{
"_id" : 9,
"value" : 100
},
{
"_id" : 10,
"value" : 100
},
{
"_id" : 11,
"value" : 100
},
{
"_id" : 12,
"value" : 100
},
{
"_id" : 13,
"value" : 100
},
{
"_id" : 14,
"value" : 100
},
{
"_id" : 15,
"value" : 100
},
{
"_id" : 16,
"value" : 100
},
{
"_id" : 17,
"value" : 100
},
{
"_id" : 18,
"value" : 100
},
{
"_id" : 19,
"value" : 100
},
{
"_id" : 20,
"value" : 100
},
{
"_id" : 21,
"value" : 100
},
{
"_id" : 22,
"value" : 100
},
{
"_id" : 23,
"value" : 100
},
{
"_id" : 24,
"value" : 100
},
{
"_id" : 25,
"value" : 100
},
{
"_id" : 26,
"value" : 100
},
{
"_id" : 27,
"value" : 100
},
{
"_id" : 28,
"value" : 100
},
{
"_id" : 29,
"value" : 100
},
{
"_id" : 30,
"value" : 100
},
{
"_id" : 31,
"value" : 100
},
{
"_id" : 32,
"value" : 100
},
{
"_id" : 33,
"value" : 100
},
{
"_id" : 34,
"value" : 100
},
{
"_id" : 35,
"value" : 100
},
{
"_id" : 36,
"value" : 100
},
{
"_id" : 37,
"value" : 100
},
{
"_id" : 38,
"value" : 100
},
{
"_id" : 39,
"value" : 100
},
{
"_id" : 40,
"value" : 100
},
{
"_id" : 41,
"value" : 100
},
{
"_id" : 42,
"value" : 100
},
{
"_id" : 43,
"value" : 100
},
{
"_id" : 44,
"value" : 100
},
{
"_id" : 45,
"value" : 100
},
{
"_id" : 46,
"value" : 100
},
{
"_id" : 47,
"value" : 100
},
{
"_id" : 48,
"value" : 100
},
{
"_id" : 49,
"value" : 100
},
{
"_id" : 50,
"value" : 100
},
{
"_id" : 51,
"value" : 100
},
{
"_id" : 52,
"value" : 100
},
{
"_id" : 53,
"value" : 100
},
{
"_id" : 54,
"value" : 100
},
{
"_id" : 55,
"value" : 100
},
{
"_id" : 56,
"value" : 100
},
{
"_id" : 57,
"value" : 100
},
{
"_id" : 58,
"value" : 100
},
{
"_id" : 59,
"value" : 100
},
{
"_id" : 60,
"value" : 100
},
{
"_id" : 61,
"value" : 100
},
{
"_id" : 62,
"value" : 100
},
{
"_id" : 63,
"value" : 100
},
{
"_id" : 64,
"value" : 100
},
{
"_id" : 65,
"value" : 100
},
{
"_id" : 66,
"value" : 100
},
{
"_id" : 67,
"value" : 100
},
{
"_id" : 68,
"value" : 100
},
{
"_id" : 69,
"value" : 100
},
{
"_id" : 70,
"value" : 100
},
{
"_id" : 71,
"value" : 100
},
{
"_id" : 72,
"value" : 100
},
{
"_id" : 73,
"value" : 100
},
{
"_id" : 74,
"value" : 100
},
{
"_id" : 75,
"value" : 100
},
{
"_id" : 76,
"value" : 100
},
{
"_id" : 77,
"value" : 100
},
{
"_id" : 78,
"value" : 100
},
{
"_id" : 79,
"value" : 100
},
{
"_id" : 80,
"value" : 100
},
{
"_id" : 81,
"value" : 100
},
{
"_id" : 82,
"value" : 100
},
{
"_id" : 83,
"value" : 100
},
{
"_id" : 84,
"value" : 100
},
{
"_id" : 85,
"value" : 100
},
{
"_id" : 86,
"value" : 100
},
{
"_id" : 87,
"value" : 100
},
{
"_id" : 88,
"value" : 100
},
{
"_id" : 89,
"value" : 100
},
{
"_id" : 90,
"value" : 100
},
{
"_id" : 91,
"value" : 100
},
{
"_id" : 92,
"value" : 100
},
{
"_id" : 93,
"value" : 100
},
{
"_id" : 94,
"value" : 100
},
{
"_id" : 95,
"value" : 100
},
{
"_id" : 96,
"value" : 100
},
{
"_id" : 97,
"value" : 100
},
{
"_id" : 98,
"value" : 100
},
{
"_id" : 99,
"value" : 100
},
{
"_id" : 100,
"value" : 100
},
{
"_id" : 101,
"value" : 100
},
{
"_id" : 102,
"value" : 100
},
{
"_id" : 103,
"value" : 100
},
{
"_id" : 104,
"value" : 100
},
{
"_id" : 105,
"value" : 100
},
{
"_id" : 106,
"value" : 100
},
{
"_id" : 107,
"value" : 100
},
{
"_id" : 108,
"value" : 100
},
{
"_id" : 109,
"value" : 100
},
{
"_id" : 110,
"value" : 100
},
{
"_id" : 111,
"value" : 100
},
{
"_id" : 112,
"value" : 100
},
{
"_id" : 113,
"value" : 100
},
{
"_id" : 114,
"value" : 100
},
{
"_id" : 115,
"value" : 100
},
{
"_id" : 116,
"value" : 100
},
{
"_id" : 117,
"value" : 100
},
{
"_id" : 118,
"value" : 100
},
{
"_id" : 119,
"value" : 100
},
{
"_id" : 120,
"value" : 100
},
{
"_id" : 121,
"value" : 100
},
{
"_id" : 122,
"value" : 100
},
{
"_id" : 123,
"value" : 100
},
{
"_id" : 124,
"value" : 100
},
{
"_id" : 125,
"value" : 100
},
{
"_id" : 126,
"value" : 100
},
{
"_id" : 127,
"value" : 100
},
{
"_id" : 128,
"value" : 100
},
{
"_id" : 129,
"value" : 100
},
{
"_id" : 130,
"value" : 100
},
{
"_id" : 131,
"value" : 100
},
{
"_id" : 132,
"value" : 100
},
{
"_id" : 133,
"value" : 100
},
{
"_id" : 134,
"value" : 100
},
{
"_id" : 135,
"value" : 100
},
{
"_id" : 136,
"value" : 100
},
{
"_id" : 137,
"value" : 100
},
{
"_id" : 138,
"value" : 100
},
{
"_id" : 139,
"value" : 100
},
{
"_id" : 140,
"value" : 100
},
{
"_id" : 141,
"value" : 100
},
{
"_id" : 142,
"value" : 100
},
{
"_id" : 143,
"value" : 100
},
{
"_id" : 144,
"value" : 100
},
{
"_id" : 145,
"value" : 100
},
{
"_id" : 146,
"value" : 100
},
{
"_id" : 147,
"value" : 100
},
{
"_id" : 148,
"value" : 100
},
{
"_id" : 149,
"value" : 100
},
{
"_id" : 150,
"value" : 100
},
{
"_id" : 151,
"value" : 100
},
{
"_id" : 152,
"value" : 100
},
{
"_id" : 153,
"value" : 100
},
{
"_id" : 154,
"value" : 100
},
{
"_id" : 155,
"value" : 100
},
{
"_id" : 156,
"value" : 100
},
{
"_id" : 157,
"value" : 100
},
{
"_id" : 158,
"value" : 100
},
{
"_id" : 159,
"value" : 100
},
{
"_id" : 160,
"value" : 100
},
{
"_id" : 161,
"value" : 100
},
{
"_id" : 162,
"value" : 100
},
{
"_id" : 163,
"value" : 100
},
{
"_id" : 164,
"value" : 100
},
{
"_id" : 165,
"value" : 100
},
{
"_id" : 166,
"value" : 100
},
{
"_id" : 167,
"value" : 100
},
{
"_id" : 168,
"value" : 100
},
{
"_id" : 169,
"value" : 100
},
{
"_id" : 170,
"value" : 100
},
{
"_id" : 171,
"value" : 100
},
{
"_id" : 172,
"value" : 100
},
{
"_id" : 173,
"value" : 100
},
{
"_id" : 174,
"value" : 100
},
{
"_id" : 175,
"value" : 100
},
{
"_id" : 176,
"value" : 100
},
{
"_id" : 177,
"value" : 100
},
{
"_id" : 178,
"value" : 100
},
{
"_id" : 179,
"value" : 100
},
{
"_id" : 180,
"value" : 100
},
{
"_id" : 181,
"value" : 100
},
{
"_id" : 182,
"value" : 100
},
{
"_id" : 183,
"value" : 100
},
{
"_id" : 184,
"value" : 100
},
{
"_id" : 185,
"value" : 100
},
{
"_id" : 186,
"value" : 100
},
{
"_id" : 187,
"value" : 100
},
{
"_id" : 188,
"value" : 100
},
{
"_id" : 189,
"value" : 100
},
{
"_id" : 190,
"value" : 100
},
{
"_id" : 191,
"value" : 100
},
{
"_id" : 192,
"value" : 100
},
{
"_id" : 193,
"value" : 100
},
{
"_id" : 194,
"value" : 100
},
{
"_id" : 195,
"value" : 100
},
{
"_id" : 196,
"value" : 100
},
{
"_id" : 197,
"value" : 100
},
{
"_id" : 198,
"value" : 100
},
{
"_id" : 199,
"value" : 100
},
{
"_id" : 200,
"value" : 100
},
{
"_id" : 201,
"value" : 100
},
{
"_id" : 202,
"value" : 100
},
{
"_id" : 203,
"value" : 100
},
{
"_id" : 204,
"value" : 100
},
{
"_id" : 205,
"value" : 100
},
{
"_id" : 206,
"value" : 100
},
{
"_id" : 207,
"value" : 100
},
{
"_id" : 208,
"value" : 100
},
{
"_id" : 209,
"value" : 100
},
{
"_id" : 210,
"value" : 100
},
{
"_id" : 211,
"value" : 100
},
{
"_id" : 212,
"value" : 100
},
{
"_id" : 213,
"value" : 100
},
{
"_id" : 214,
"value" : 100
},
{
"_id" : 215,
"value" : 100
},
{
"_id" : 216,
"value" : 100
},
{
"_id" : 217,
"value" : 100
},
{
"_id" : 218,
"value" : 100
},
{
"_id" : 219,
"value" : 100
},
{
"_id" : 220,
"value" : 100
},
{
"_id" : 221,
"value" : 100
},
{
"_id" : 222,
"value" : 100
},
{
"_id" : 223,
"value" : 100
},
{
"_id" : 224,
"value" : 100
},
{
"_id" : 225,
"value" : 100
},
{
"_id" : 226,
"value" : 100
},
{
"_id" : 227,
"value" : 100
},
{
"_id" : 228,
"value" : 100
},
{
"_id" : 229,
"value" : 100
},
{
"_id" : 230,
"value" : 100
},
{
"_id" : 231,
"value" : 100
},
{
"_id" : 232,
"value" : 100
},
{
"_id" : 233,
"value" : 100
},
{
"_id" : 234,
"value" : 100
},
{
"_id" : 235,
"value" : 100
},
{
"_id" : 236,
"value" : 100
},
{
"_id" : 237,
"value" : 100
},
{
"_id" : 238,
"value" : 100
},
{
"_id" : 239,
"value" : 100
},
{
"_id" : 240,
"value" : 100
},
{
"_id" : 241,
"value" : 100
},
{
"_id" : 242,
"value" : 100
},
{
"_id" : 243,
"value" : 100
},
{
"_id" : 244,
"value" : 100
},
{
"_id" : 245,
"value" : 100
},
{
"_id" : 246,
"value" : 100
},
{
"_id" : 247,
"value" : 100
},
{
"_id" : 248,
"value" : 100
},
{
"_id" : 249,
"value" : 100
},
{
"_id" : 250,
"value" : 100
},
{
"_id" : 251,
"value" : 100
},
{
"_id" : 252,
"value" : 100
},
{
"_id" : 253,
"value" : 100
},
{
"_id" : 254,
"value" : 100
},
{
"_id" : 255,
"value" : 100
},
{
"_id" : 256,
"value" : 100
},
{
"_id" : 257,
"value" : 100
},
{
"_id" : 258,
"value" : 100
},
{
"_id" : 259,
"value" : 100
},
{
"_id" : 260,
"value" : 100
},
{
"_id" : 261,
"value" : 100
},
{
"_id" : 262,
"value" : 100
},
{
"_id" : 263,
"value" : 100
},
{
"_id" : 264,
"value" : 100
},
{
"_id" : 265,
"value" : 100
},
{
"_id" : 266,
"value" : 100
},
{
"_id" : 267,
"value" : 100
},
{
"_id" : 268,
"value" : 100
},
{
"_id" : 269,
"value" : 100
},
{
"_id" : 270,
"value" : 100
},
{
"_id" : 271,
"value" : 100
},
{
"_id" : 272,
"value" : 100
},
{
"_id" : 273,
"value" : 100
},
{
"_id" : 274,
"value" : 100
},
{
"_id" : 275,
"value" : 100
},
{
"_id" : 276,
"value" : 100
},
{
"_id" : 277,
"value" : 100
},
{
"_id" : 278,
"value" : 100
},
{
"_id" : 279,
"value" : 100
},
{
"_id" : 280,
"value" : 100
},
{
"_id" : 281,
"value" : 100
},
{
"_id" : 282,
"value" : 100
},
{
"_id" : 283,
"value" : 100
},
{
"_id" : 284,
"value" : 100
},
{
"_id" : 285,
"value" : 100
},
{
"_id" : 286,
"value" : 100
},
{
"_id" : 287,
"value" : 100
},
{
"_id" : 288,
"value" : 100
},
{
"_id" : 289,
"value" : 100
},
{
"_id" : 290,
"value" : 100
},
{
"_id" : 291,
"value" : 100
},
{
"_id" : 292,
"value" : 100
},
{
"_id" : 293,
"value" : 100
},
{
"_id" : 294,
"value" : 100
},
{
"_id" : 295,
"value" : 100
},
{
"_id" : 296,
"value" : 100
},
{
"_id" : 297,
"value" : 100
},
{
"_id" : 298,
"value" : 100
},
{
"_id" : 299,
"value" : 100
},
{
"_id" : 300,
"value" : 100
},
{
"_id" : 301,
"value" : 100
},
{
"_id" : 302,
"value" : 100
},
{
"_id" : 303,
"value" : 100
},
{
"_id" : 304,
"value" : 100
},
{
"_id" : 305,
"value" : 100
},
{
"_id" : 306,
"value" : 100
},
{
"_id" : 307,
"value" : 100
},
{
"_id" : 308,
"value" : 100
},
{
"_id" : 309,
"value" : 100
},
{
"_id" : 310,
"value" : 100
},
{
"_id" : 311,
"value" : 100
},
{
"_id" : 312,
"value" : 100
},
{
"_id" : 313,
"value" : 100
},
{
"_id" : 314,
"value" : 100
},
{
"_id" : 315,
"value" : 100
},
{
"_id" : 316,
"value" : 100
},
{
"_id" : 317,
"value" : 100
},
{
"_id" : 318,
"value" : 100
},
{
"_id" : 319,
"value" : 100
},
{
"_id" : 320,
"value" : 100
},
{
"_id" : 321,
"value" : 100
},
{
"_id" : 322,
"value" : 100
},
{
"_id" : 323,
"value" : 100
},
{
"_id" : 324,
"value" : 100
},
{
"_id" : 325,
"value" : 100
},
{
"_id" : 326,
"value" : 100
},
{
"_id" : 327,
"value" : 100
},
{
"_id" : 328,
"value" : 100
},
{
"_id" : 329,
"value" : 100
},
{
"_id" : 330,
"value" : 100
},
{
"_id" : 331,
"value" : 100
},
{
"_id" : 332,
"value" : 100
},
{
"_id" : 333,
"value" : 100
},
{
"_id" : 334,
"value" : 100
},
{
"_id" : 335,
"value" : 100
},
{
"_id" : 336,
"value" : 100
},
{
"_id" : 337,
"value" : 100
},
{
"_id" : 338,
"value" : 100
},
{
"_id" : 339,
"value" : 100
},
{
"_id" : 340,
"value" : 100
},
{
"_id" : 341,
"value" : 100
},
{
"_id" : 342,
"value" : 100
},
{
"_id" : 343,
"value" : 100
},
{
"_id" : 344,
"value" : 100
},
{
"_id" : 345,
"value" : 100
},
{
"_id" : 346,
"value" : 100
},
{
"_id" : 347,
"value" : 100
},
{
"_id" : 348,
"value" : 100
},
{
"_id" : 349,
"value" : 100
},
{
"_id" : 350,
"value" : 100
},
{
"_id" : 351,
"value" : 100
},
{
"_id" : 352,
"value" : 100
},
{
"_id" : 353,
"value" : 100
},
{
"_id" : 354,
"value" : 100
},
{
"_id" : 355,
"value" : 100
},
{
"_id" : 356,
"value" : 100
},
{
"_id" : 357,
"value" : 100
},
{
"_id" : 358,
"value" : 100
},
{
"_id" : 359,
"value" : 100
},
{
"_id" : 360,
"value" : 100
},
{
"_id" : 361,
"value" : 100
},
{
"_id" : 362,
"value" : 100
},
{
"_id" : 363,
"value" : 100
},
{
"_id" : 364,
"value" : 100
},
{
"_id" : 365,
"value" : 100
},
{
"_id" : 366,
"value" : 100
},
{
"_id" : 367,
"value" : 100
},
{
"_id" : 368,
"value" : 100
},
{
"_id" : 369,
"value" : 100
},
{
"_id" : 370,
"value" : 100
},
{
"_id" : 371,
"value" : 100
},
{
"_id" : 372,
"value" : 100
},
{
"_id" : 373,
"value" : 100
},
{
"_id" : 374,
"value" : 100
},
{
"_id" : 375,
"value" : 100
},
{
"_id" : 376,
"value" : 100
},
{
"_id" : 377,
"value" : 100
},
{
"_id" : 378,
"value" : 100
},
{
"_id" : 379,
"value" : 100
},
{
"_id" : 380,
"value" : 100
},
{
"_id" : 381,
"value" : 100
},
{
"_id" : 382,
"value" : 100
},
{
"_id" : 383,
"value" : 100
},
{
"_id" : 384,
"value" : 100
},
{
"_id" : 385,
"value" : 100
},
{
"_id" : 386,
"value" : 100
},
{
"_id" : 387,
"value" : 100
},
{
"_id" : 388,
"value" : 100
},
{
"_id" : 389,
"value" : 100
},
{
"_id" : 390,
"value" : 100
},
{
"_id" : 391,
"value" : 100
},
{
"_id" : 392,
"value" : 100
},
{
"_id" : 393,
"value" : 100
},
{
"_id" : 394,
"value" : 100
},
{
"_id" : 395,
"value" : 100
},
{
"_id" : 396,
"value" : 100
},
{
"_id" : 397,
"value" : 100
},
{
"_id" : 398,
"value" : 100
},
{
"_id" : 399,
"value" : 100
},
{
"_id" : 400,
"value" : 100
},
{
"_id" : 401,
"value" : 100
},
{
"_id" : 402,
"value" : 100
},
{
"_id" : 403,
"value" : 100
},
{
"_id" : 404,
"value" : 100
},
{
"_id" : 405,
"value" : 100
},
{
"_id" : 406,
"value" : 100
},
{
"_id" : 407,
"value" : 100
},
{
"_id" : 408,
"value" : 100
},
{
"_id" : 409,
"value" : 100
},
{
"_id" : 410,
"value" : 100
},
{
"_id" : 411,
"value" : 100
},
{
"_id" : 412,
"value" : 100
},
{
"_id" : 413,
"value" : 100
},
{
"_id" : 414,
"value" : 100
},
{
"_id" : 415,
"value" : 100
},
{
"_id" : 416,
"value" : 100
},
{
"_id" : 417,
"value" : 100
},
{
"_id" : 418,
"value" : 100
},
{
"_id" : 419,
"value" : 100
},
{
"_id" : 420,
"value" : 100
},
{
"_id" : 421,
"value" : 100
},
{
"_id" : 422,
"value" : 100
},
{
"_id" : 423,
"value" : 100
},
{
"_id" : 424,
"value" : 100
},
{
"_id" : 425,
"value" : 100
},
{
"_id" : 426,
"value" : 100
},
{
"_id" : 427,
"value" : 100
},
{
"_id" : 428,
"value" : 100
},
{
"_id" : 429,
"value" : 100
},
{
"_id" : 430,
"value" : 100
},
{
"_id" : 431,
"value" : 100
},
{
"_id" : 432,
"value" : 100
},
{
"_id" : 433,
"value" : 100
},
{
"_id" : 434,
"value" : 100
},
{
"_id" : 435,
"value" : 100
},
{
"_id" : 436,
"value" : 100
},
{
"_id" : 437,
"value" : 100
},
{
"_id" : 438,
"value" : 100
},
{
"_id" : 439,
"value" : 100
},
{
"_id" : 440,
"value" : 100
},
{
"_id" : 441,
"value" : 100
},
{
"_id" : 442,
"value" : 100
},
{
"_id" : 443,
"value" : 100
},
{
"_id" : 444,
"value" : 100
},
{
"_id" : 445,
"value" : 100
},
{
"_id" : 446,
"value" : 100
},
{
"_id" : 447,
"value" : 100
},
{
"_id" : 448,
"value" : 100
},
{
"_id" : 449,
"value" : 100
},
{
"_id" : 450,
"value" : 100
},
{
"_id" : 451,
"value" : 100
},
{
"_id" : 452,
"value" : 100
},
{
"_id" : 453,
"value" : 100
},
{
"_id" : 454,
"value" : 100
},
{
"_id" : 455,
"value" : 100
},
{
"_id" : 456,
"value" : 100
},
{
"_id" : 457,
"value" : 100
},
{
"_id" : 458,
"value" : 100
},
{
"_id" : 459,
"value" : 100
},
{
"_id" : 460,
"value" : 100
},
{
"_id" : 461,
"value" : 100
},
{
"_id" : 462,
"value" : 100
},
{
"_id" : 463,
"value" : 100
},
{
"_id" : 464,
"value" : 100
},
{
"_id" : 465,
"value" : 100
},
{
"_id" : 466,
"value" : 100
},
{
"_id" : 467,
"value" : 100
},
{
"_id" : 468,
"value" : 100
},
{
"_id" : 469,
"value" : 100
},
{
"_id" : 470,
"value" : 100
},
{
"_id" : 471,
"value" : 100
},
{
"_id" : 472,
"value" : 100
},
{
"_id" : 473,
"value" : 100
},
{
"_id" : 474,
"value" : 100
},
{
"_id" : 475,
"value" : 100
},
{
"_id" : 476,
"value" : 100
},
{
"_id" : 477,
"value" : 100
},
{
"_id" : 478,
"value" : 100
},
{
"_id" : 479,
"value" : 100
},
{
"_id" : 480,
"value" : 100
},
{
"_id" : 481,
"value" : 100
},
{
"_id" : 482,
"value" : 100
},
{
"_id" : 483,
"value" : 100
},
{
"_id" : 484,
"value" : 100
},
{
"_id" : 485,
"value" : 100
},
{
"_id" : 486,
"value" : 100
},
{
"_id" : 487,
"value" : 100
},
{
"_id" : 488,
"value" : 100
},
{
"_id" : 489,
"value" : 100
},
{
"_id" : 490,
"value" : 100
},
{
"_id" : 491,
"value" : 100
},
{
"_id" : 492,
"value" : 100
},
{
"_id" : 493,
"value" : 100
},
{
"_id" : 494,
"value" : 100
},
{
"_id" : 495,
"value" : 100
},
{
"_id" : 496,
"value" : 100
},
{
"_id" : 497,
"value" : 100
},
{
"_id" : 498,
"value" : 100
},
{
"_id" : 499,
"value" : 100
},
{
"_id" : 500,
"value" : 100
},
{
"_id" : 501,
"value" : 100
},
{
"_id" : 502,
"value" : 100
},
{
"_id" : 503,
"value" : 100
},
{
"_id" : 504,
"value" : 100
},
{
"_id" : 505,
"value" : 100
},
{
"_id" : 506,
"value" : 100
},
{
"_id" : 507,
"value" : 100
},
{
"_id" : 508,
"value" : 100
},
{
"_id" : 509,
"value" : 100
},
{
"_id" : 510,
"value" : 100
},
{
"_id" : 511,
"value" : 100
}
],
"timeMillis" : 1295,
"counts" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
},
"ok" : 1,
}
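Each of the 512 inline results above carries value 100, which matches the reported counts: 51,200 input documents each emit one value for one of 512 distinct keys, so every key reduces to 51200 / 512 = 100, with 5,120 intermediate reduces along the way. A minimal shell call that could produce an inline result of this shape, reusing the map and reduce functions shown verbatim in the nearby command log lines (the test's exact invocation is not shown here, so this is a sketch), is:

// Sketch only: collection name taken from the surrounding log, options assumed.
db.getSiblingDB("mrShard").srcNonSharded.mapReduce(
    function () { emit(this.i, 1); },                       // map: one count per document, keyed by this.i
    function (key, values) { return Array.sum(values); },   // reduce: sum the partial counts for a key
    { out: { inline: 1 } }                                  // return the reduced documents in the command reply
);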
m30999| Thu Jun 14 01:41:14 [conn] couldn't find database [mrShardOtherDB] in config db
m30999| Thu Jun 14 01:41:14 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 64 writeLock: 0
m30999| Thu Jun 14 01:41:14 [conn] put [mrShardOtherDB] on: shard0000:localhost:30000
m30001| Thu Jun 14 01:41:14 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_4_inc
m30001| Thu Jun 14 01:41:14 [conn3] build index mrShard.tmp.mr.srcNonSharded_4_inc { 0: 1 }
m30001| Thu Jun 14 01:41:15 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:15 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_4
m30001| Thu Jun 14 01:41:15 [conn3] build index mrShard.tmp.mr.srcNonSharded_4 { _id: 1 }
m30001| Thu Jun 14 01:41:15 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:16 [conn3] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652474_5
m30001| Thu Jun 14 01:41:16 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_4
m30001| Thu Jun 14 01:41:16 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_4
m30001| Thu Jun 14 01:41:16 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_4_inc
m30001| Thu Jun 14 01:41:16 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcNonSharded_1339652474_5", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:9840 r:6736110 w:2537967 reslen:158 1299ms
m30999| Thu Jun 14 01:41:16 [conn] MR with single shard output, NS=mrShardOtherDB.mrReplace primary=shard0000:localhost:30000
m30000| Thu Jun 14 01:41:16 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_0
m30000| Thu Jun 14 01:41:16 [FileAllocator] allocating new datafile /data/db/mrShard0/mrShardOtherDB.ns, filling with zeroes...
m30000| Thu Jun 14 01:41:16 [FileAllocator] done allocating datafile /data/db/mrShard0/mrShardOtherDB.ns, size: 16MB, took 0.399 secs
m30000| Thu Jun 14 01:41:16 [FileAllocator] allocating new datafile /data/db/mrShard0/mrShardOtherDB.0, filling with zeroes...
m30000| Thu Jun 14 01:41:17 [FileAllocator] done allocating datafile /data/db/mrShard0/mrShardOtherDB.0, size: 16MB, took 0.306 secs
m30000| Thu Jun 14 01:41:17 [FileAllocator] allocating new datafile /data/db/mrShard0/mrShardOtherDB.1, filling with zeroes...
m30000| Thu Jun 14 01:41:17 [conn7] build index mrShardOtherDB.tmp.mr.srcNonSharded_0 { _id: 1 }
m30000| Thu Jun 14 01:41:17 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:17 [initandlisten] connection accepted from 127.0.0.1:60112 #13 (13 connections now open)
m30000| Thu Jun 14 01:41:17 [initandlisten] connection accepted from 127.0.0.1:60113 #14 (14 connections now open)
m30001| Thu Jun 14 01:41:17 [initandlisten] connection accepted from 127.0.0.1:48689 #8 (8 connections now open)
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShardOtherDB.mrReplace
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_0
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_0
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_0
m30000| Thu Jun 14 01:41:17 [conn7] command mrShardOtherDB.$cmd command: { mapreduce.shardedfinish: { mapreduce: "srcNonSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: { replace: "mrReplace", db: "mrShardOtherDB" } }, inputDB: "mrShard", shardedOutputCollection: "tmp.mrs.srcNonSharded_1339652474_5", shards: { localhost:30001: { result: "tmp.mrs.srcNonSharded_1339652474_5", timeMillis: 1299, counts: { input: 51200, emit: 51200, reduce: 5120, output: 512 }, ok: 1.0 } }, shardCounts: { localhost:30001: { input: 51200, emit: 51200, reduce: 5120, output: 512 } }, counts: { emit: 51200, input: 51200, output: 512, reduce: 5120 } } ntoreturn:1 keyUpdates:0 locks(micros) W:1832 r:12 w:724305 reslen:238 774ms
{
"result" : {
"db" : "mrShardOtherDB",
"collection" : "mrReplace"
},
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(5120),
"output" : NumberLong(512)
},
"timeMillis" : 2080,
"timing" : {
"shardProcessing" : 1302,
"postProcessing" : 777
},
"shardCounts" : {
"localhost:30001" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30000" : {
"input" : NumberLong(512),
"reduce" : NumberLong(0),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
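This run writes its output into a different database: the mapreduce.shardedfinish command above carries out: { replace: "mrReplace", db: "mrShardOtherDB" }, which is why mongos first allocates mrShardOtherDB on shard0000. The postProcessCounts reduce value is 0 because only one shard (localhost:30001) produced intermediate output, so the final pass simply copies the 512 documents into the target collection. A shell-level call of this shape (a sketch; the test's wrapper code is not shown in the log) would be:

db.getSiblingDB("mrShard").srcNonSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: { replace: "mrReplace", db: "mrShardOtherDB" } }  // replace a collection in another database
);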
m30001| Thu Jun 14 01:41:17 [conn2] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652474_5
m30999| Thu Jun 14 01:41:17 [conn] setShardVersion shard0000 localhost:30000 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:17 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.srcSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.srcSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:17 [conn] setShardVersion shard0000 localhost:30000 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:17 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:17 [conn] setShardVersion shard0001 localhost:30001 mrShard.srcSharded { setShardVersion: "mrShard.srcSharded", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('4fd97967607081b222f40291'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:17 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_5_inc
m30001| Thu Jun 14 01:41:17 [conn3] build index mrShard.tmp.mr.srcSharded_5_inc { 0: 1 }
m30001| Thu Jun 14 01:41:17 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:17 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_5
m30001| Thu Jun 14 01:41:17 [conn3] build index mrShard.tmp.mr.srcSharded_5 { _id: 1 }
m30001| Thu Jun 14 01:41:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:17 [conn7] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_1_inc
m30000| Thu Jun 14 01:41:17 [conn7] build index mrShard.tmp.mr.srcSharded_1_inc { 0: 1 }
m30000| Thu Jun 14 01:41:17 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_1
m30000| Thu Jun 14 01:41:17 [conn7] build index mrShard.tmp.mr.srcSharded_1 { _id: 1 }
m30000| Thu Jun 14 01:41:17 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652477_6
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_1
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_1
m30000| Thu Jun 14 01:41:17 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_1_inc
m30000| Thu Jun 14 01:41:17 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652477_6", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:3879 r:343039 w:741952 reslen:155 445ms
m30999| Thu Jun 14 01:41:17 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97967607081b222f40291'), ok: 1.0 }
m30000| Thu Jun 14 01:41:17 [FileAllocator] done allocating datafile /data/db/mrShard0/mrShardOtherDB.1, size: 32MB, took 0.645 secs
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652477_6
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_5
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_5
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_5_inc
m30001| Thu Jun 14 01:41:18 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652477_6", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:11884 r:7641337 w:2552393 reslen:155 1055ms
m30999| Thu Jun 14 01:41:18 [conn] MR with single shard output, NS= primary=shard0001:localhost:30001
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_6
m30001| Thu Jun 14 01:41:18 [conn3] build index mrShard.tmp.mr.srcSharded_6 { _id: 1 }
m30001| Thu Jun 14 01:41:18 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:18 [conn3] ChunkManager: time to load chunks for mrShard.srcSharded: 0ms sequenceNumber: 2 version: 3|1||4fd97967607081b222f40291 based on: (empty)
m30000| Thu Jun 14 01:41:18 [initandlisten] connection accepted from 127.0.0.1:60115 #15 (15 connections now open)
m30001| Thu Jun 14 01:41:18 [initandlisten] connection accepted from 127.0.0.1:48691 #9 (9 connections now open)
m30001| Thu Jun 14 01:41:18 [initandlisten] connection accepted from 127.0.0.1:48692 #10 (10 connections now open)
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.mrBasicInSharded
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_6
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_6
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_6
m30000| Thu Jun 14 01:41:18 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652477_6
m30001| Thu Jun 14 01:41:18 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652477_6
{
"result" : "mrBasicInSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1162,
"timing" : {
"shardProcessing" : 1131,
"postProcessing" : 31
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
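Here the source collection is sharded: shard0000 scans 12,896 documents and shard0001 scans 38,304 (12896 + 38304 = 51200), each producing 512 partial results, and the post-processing pass on shard0001 re-reduces the 1,024 partials into the final 512 documents of mrShard.mrBasicInSharded. Passing a bare collection name as the out argument gives this default single-collection output; a sketch of such a call (assumed, not shown verbatim in the log) is:

db.getSiblingDB("mrShard").srcSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: "mrBasicInSharded" }   // shorthand for writing mrShard.mrBasicInSharded on the primary shard
);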
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_7_inc
m30001| Thu Jun 14 01:41:18 [conn3] build index mrShard.tmp.mr.srcSharded_7_inc { 0: 1 }
m30001| Thu Jun 14 01:41:18 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:18 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_7
m30001| Thu Jun 14 01:41:18 [conn3] build index mrShard.tmp.mr.srcSharded_7 { _id: 1 }
m30001| Thu Jun 14 01:41:18 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:18 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_2_inc
m30000| Thu Jun 14 01:41:18 [conn7] build index mrShard.tmp.mr.srcSharded_2_inc { 0: 1 }
m30000| Thu Jun 14 01:41:18 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:18 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_2
m30000| Thu Jun 14 01:41:18 [conn7] build index mrShard.tmp.mr.srcSharded_2 { _id: 1 }
m30000| Thu Jun 14 01:41:18 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:18 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652478_7
m30000| Thu Jun 14 01:41:18 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_2
m30000| Thu Jun 14 01:41:18 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_2
m30000| Thu Jun 14 01:41:18 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_2_inc
m30000| Thu Jun 14 01:41:18 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652478_7", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:5720 r:630784 w:756644 reslen:155 356ms
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652478_7
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_7
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_7
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_7_inc
m30001| Thu Jun 14 01:41:19 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652478_7", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:15831 r:8566892 w:2577326 reslen:155 1075ms
m30999| Thu Jun 14 01:41:19 [conn] MR with single shard output, NS=mrShard.mrReplaceInSharded primary=shard0001:localhost:30001
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_8
m30001| Thu Jun 14 01:41:19 [conn3] build index mrShard.tmp.mr.srcSharded_8 { _id: 1 }
m30001| Thu Jun 14 01:41:19 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.mrReplaceInSharded
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_8
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_8
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_8
m30000| Thu Jun 14 01:41:19 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652478_7
m30001| Thu Jun 14 01:41:19 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652478_7
{
"result" : "mrReplaceInSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1105,
"timing" : {
"shardProcessing" : 1076,
"postProcessing" : 29
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
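The explicit replace form behaves like the bare-name output above, dropping and rebuilding mrShard.mrReplaceInSharded on the primary shard (shard0001 here, as the drop commands show). A sketch of the call shape, with the option name assumed from the result:

db.getSiblingDB("mrShard").srcSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: { replace: "mrReplaceInSharded" } }   // drop and recreate the target collection
);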
m30000| Thu Jun 14 01:41:19 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_3_inc
m30000| Thu Jun 14 01:41:19 [conn7] build index mrShard.tmp.mr.srcSharded_3_inc { 0: 1 }
m30000| Thu Jun 14 01:41:19 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:19 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_3
m30000| Thu Jun 14 01:41:19 [conn7] build index mrShard.tmp.mr.srcSharded_3 { _id: 1 }
m30000| Thu Jun 14 01:41:19 [conn7] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_9_inc
m30001| Thu Jun 14 01:41:19 [conn3] build index mrShard.tmp.mr.srcSharded_9_inc { 0: 1 }
m30001| Thu Jun 14 01:41:19 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:19 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_9
m30001| Thu Jun 14 01:41:19 [conn3] build index mrShard.tmp.mr.srcSharded_9 { _id: 1 }
m30001| Thu Jun 14 01:41:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:19 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652479_8
m30000| Thu Jun 14 01:41:19 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_3
m30000| Thu Jun 14 01:41:19 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_3
m30000| Thu Jun 14 01:41:19 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_3_inc
m30000| Thu Jun 14 01:41:19 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652479_8", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:7564 r:963489 w:771091 reslen:155 398ms
m30999| Thu Jun 14 01:41:19 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:41:19 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:19 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd9797f607081b222f40294" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97977607081b222f40293" } }
m30999| Thu Jun 14 01:41:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd9797f607081b222f40294
m30999| Thu Jun 14 01:41:19 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:41:19 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:19 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:19 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:19 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:19 [Balancer] shard0000
m30999| Thu Jun 14 01:41:19 [Balancer] { _id: "mrShard.srcSharded-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:19 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:19 [Balancer] shard0001
m30999| Thu Jun 14 01:41:19 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796f56cc70fc67ed99f9')", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:19 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9797256cc70fc67ee03ae')", lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:19 [Balancer] ----
m30999| Thu Jun 14 01:41:19 [Balancer] collection : mrShard.srcSharded
m30999| Thu Jun 14 01:41:19 [Balancer] donor : 2 chunks on shard0000
m30999| Thu Jun 14 01:41:19 [Balancer] receiver : 2 chunks on shard0000
m30999| Thu Jun 14 01:41:19 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:41:19 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:41:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
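The balancer round above sees two chunks of mrShard.srcSharded on each shard and therefore moves nothing. The same distribution can be inspected from a mongos shell by reading the config database (a sketch for reference; these queries are not part of the test output):

var cfg = db.getSiblingDB("config");
cfg.chunks.find({ ns: "mrShard.srcSharded" }, { shard: 1, min: 1, max: 1 }).sort({ min: 1 });
cfg.chunks.count({ ns: "mrShard.srcSharded", shard: "shard0000" });   // 2, per the ShardToChunksMap above
cfg.chunks.count({ ns: "mrShard.srcSharded", shard: "shard0001" });   // 2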
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652479_8
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_9
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_9
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_9_inc
m30001| Thu Jun 14 01:41:20 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652479_8", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:19713 r:9622191 w:2602075 reslen:155 1204ms
m30999| Thu Jun 14 01:41:20 [conn] MR with single shard output, NS=mrShard.mrMergeInSharded primary=shard0001:localhost:30001
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_10
m30001| Thu Jun 14 01:41:20 [conn3] build index mrShard.tmp.mr.srcSharded_10 { _id: 1 }
m30001| Thu Jun 14 01:41:20 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.mrMergeInSharded
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_10
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_10
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_10
m30000| Thu Jun 14 01:41:20 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652479_8
m30001| Thu Jun 14 01:41:20 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652479_8
{
"result" : "mrMergeInSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1414,
"timing" : {
"shardProcessing" : 1364,
"postProcessing" : 49
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
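Merge output writes into the target collection without dropping it: documents whose keys already exist in mrShard.mrMergeInSharded are overwritten and all other documents are left in place. A sketch of the call shape (collection name taken from the result above):

db.getSiblingDB("mrShard").srcSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: { merge: "mrMergeInSharded" } }   // upsert results into the existing collection
);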
m30000| Thu Jun 14 01:41:20 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_4_inc
m30000| Thu Jun 14 01:41:20 [conn7] build index mrShard.tmp.mr.srcSharded_4_inc { 0: 1 }
m30000| Thu Jun 14 01:41:20 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:20 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_4
m30000| Thu Jun 14 01:41:20 [conn7] build index mrShard.tmp.mr.srcSharded_4 { _id: 1 }
m30000| Thu Jun 14 01:41:20 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:21 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652480_9
m30000| Thu Jun 14 01:41:21 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_4
m30000| Thu Jun 14 01:41:21 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_4
m30000| Thu Jun 14 01:41:21 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_4_inc
m30000| Thu Jun 14 01:41:21 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652480_9", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:9421 r:1244439 w:785721 reslen:155 347ms
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_11_inc
m30001| Thu Jun 14 01:41:20 [conn3] build index mrShard.tmp.mr.srcSharded_11_inc { 0: 1 }
m30001| Thu Jun 14 01:41:20 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:20 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_11
m30001| Thu Jun 14 01:41:20 [conn3] build index mrShard.tmp.mr.srcSharded_11 { _id: 1 }
m30001| Thu Jun 14 01:41:20 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652480_9
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_11
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_11
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_11_inc
m30001| Thu Jun 14 01:41:21 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652480_9", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:23616 r:10523911 w:2626953 reslen:155 1072ms
m30999| Thu Jun 14 01:41:21 [conn] MR with single shard output, NS=mrShard.mrReduceInSharded primary=shard0001:localhost:30001
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_12
m30001| Thu Jun 14 01:41:21 [conn3] build index mrShard.tmp.mr.srcSharded_12 { _id: 1 }
m30001| Thu Jun 14 01:41:21 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.mrReduceInSharded
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_12
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_12
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_12
m30000| Thu Jun 14 01:41:21 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652480_9
m30001| Thu Jun 14 01:41:21 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652480_9
{
"result" : "mrReduceInSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1101,
"timing" : {
"shardProcessing" : 1072,
"postProcessing" : 28
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
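Reduce output goes one step further than merge: when a key already exists in mrShard.mrReduceInSharded, the reduce function runs again over the stored value and the newly computed one before writing. A sketch (collection name from the result above):

db.getSiblingDB("mrShard").srcSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: { reduce: "mrReduceInSharded" } }   // re-reduce new results against existing documents
);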
m30000| Thu Jun 14 01:41:21 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_5_inc
m30000| Thu Jun 14 01:41:21 [conn7] build index mrShard.tmp.mr.srcSharded_5_inc { 0: 1 }
m30000| Thu Jun 14 01:41:21 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:21 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_5
m30000| Thu Jun 14 01:41:21 [conn7] build index mrShard.tmp.mr.srcSharded_5 { _id: 1 }
m30000| Thu Jun 14 01:41:21 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:22 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652481_10
m30000| Thu Jun 14 01:41:22 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_5
m30000| Thu Jun 14 01:41:22 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_5
m30000| Thu Jun 14 01:41:22 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_5_inc
m30000| Thu Jun 14 01:41:22 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652481_10", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:11231 r:1580479 w:800412 reslen:156 403ms
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_13_inc
m30001| Thu Jun 14 01:41:21 [conn3] build index mrShard.tmp.mr.srcSharded_13_inc { 0: 1 }
m30001| Thu Jun 14 01:41:21 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:21 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_13
m30001| Thu Jun 14 01:41:21 [conn3] build index mrShard.tmp.mr.srcSharded_13 { _id: 1 }
m30001| Thu Jun 14 01:41:21 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:22 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652481_10
m30001| Thu Jun 14 01:41:22 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_13
m30001| Thu Jun 14 01:41:22 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_13
m30001| Thu Jun 14 01:41:22 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_13_inc
m30001| Thu Jun 14 01:41:22 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652481_10", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:27450 r:11429792 w:2651919 reslen:156 1056ms
m30999| Thu Jun 14 01:41:22 [conn] MR with single shard output, NS=mrShard. primary=shard0001:localhost:30001
m30000| Thu Jun 14 01:41:22 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652481_10
m30001| Thu Jun 14 01:41:22 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652481_10
{
"results" : [
{
"_id" : 0,
"value" : 100
},
{
"_id" : 1,
"value" : 100
},
{
"_id" : 2,
"value" : 100
},
{
"_id" : 3,
"value" : 100
},
{
"_id" : 4,
"value" : 100
},
{
"_id" : 5,
"value" : 100
},
{
"_id" : 6,
"value" : 100
},
{
"_id" : 7,
"value" : 100
},
{
"_id" : 8,
"value" : 100
},
{
"_id" : 9,
"value" : 100
},
{
"_id" : 10,
"value" : 100
},
{
"_id" : 11,
"value" : 100
},
{
"_id" : 12,
"value" : 100
},
{
"_id" : 13,
"value" : 100
},
{
"_id" : 14,
"value" : 100
},
{
"_id" : 15,
"value" : 100
},
{
"_id" : 16,
"value" : 100
},
{
"_id" : 17,
"value" : 100
},
{
"_id" : 18,
"value" : 100
},
{
"_id" : 19,
"value" : 100
},
{
"_id" : 20,
"value" : 100
},
{
"_id" : 21,
"value" : 100
},
{
"_id" : 22,
"value" : 100
},
{
"_id" : 23,
"value" : 100
},
{
"_id" : 24,
"value" : 100
},
{
"_id" : 25,
"value" : 100
},
{
"_id" : 26,
"value" : 100
},
{
"_id" : 27,
"value" : 100
},
{
"_id" : 28,
"value" : 100
},
{
"_id" : 29,
"value" : 100
},
{
"_id" : 30,
"value" : 100
},
{
"_id" : 31,
"value" : 100
},
{
"_id" : 32,
"value" : 100
},
{
"_id" : 33,
"value" : 100
},
{
"_id" : 34,
"value" : 100
},
{
"_id" : 35,
"value" : 100
},
{
"_id" : 36,
"value" : 100
},
{
"_id" : 37,
"value" : 100
},
{
"_id" : 38,
"value" : 100
},
{
"_id" : 39,
"value" : 100
},
{
"_id" : 40,
"value" : 100
},
{
"_id" : 41,
"value" : 100
},
{
"_id" : 42,
"value" : 100
},
{
"_id" : 43,
"value" : 100
},
{
"_id" : 44,
"value" : 100
},
{
"_id" : 45,
"value" : 100
},
{
"_id" : 46,
"value" : 100
},
{
"_id" : 47,
"value" : 100
},
{
"_id" : 48,
"value" : 100
},
{
"_id" : 49,
"value" : 100
},
{
"_id" : 50,
"value" : 100
},
{
"_id" : 51,
"value" : 100
},
{
"_id" : 52,
"value" : 100
},
{
"_id" : 53,
"value" : 100
},
{
"_id" : 54,
"value" : 100
},
{
"_id" : 55,
"value" : 100
},
{
"_id" : 56,
"value" : 100
},
{
"_id" : 57,
"value" : 100
},
{
"_id" : 58,
"value" : 100
},
{
"_id" : 59,
"value" : 100
},
{
"_id" : 60,
"value" : 100
},
{
"_id" : 61,
"value" : 100
},
{
"_id" : 62,
"value" : 100
},
{
"_id" : 63,
"value" : 100
},
{
"_id" : 64,
"value" : 100
},
{
"_id" : 65,
"value" : 100
},
{
"_id" : 66,
"value" : 100
},
{
"_id" : 67,
"value" : 100
},
{
"_id" : 68,
"value" : 100
},
{
"_id" : 69,
"value" : 100
},
{
"_id" : 70,
"value" : 100
},
{
"_id" : 71,
"value" : 100
},
{
"_id" : 72,
"value" : 100
},
{
"_id" : 73,
"value" : 100
},
{
"_id" : 74,
"value" : 100
},
{
"_id" : 75,
"value" : 100
},
{
"_id" : 76,
"value" : 100
},
{
"_id" : 77,
"value" : 100
},
{
"_id" : 78,
"value" : 100
},
{
"_id" : 79,
"value" : 100
},
{
"_id" : 80,
"value" : 100
},
{
"_id" : 81,
"value" : 100
},
{
"_id" : 82,
"value" : 100
},
{
"_id" : 83,
"value" : 100
},
{
"_id" : 84,
"value" : 100
},
{
"_id" : 85,
"value" : 100
},
{
"_id" : 86,
"value" : 100
},
{
"_id" : 87,
"value" : 100
},
{
"_id" : 88,
"value" : 100
},
{
"_id" : 89,
"value" : 100
},
{
"_id" : 90,
"value" : 100
},
{
"_id" : 91,
"value" : 100
},
{
"_id" : 92,
"value" : 100
},
{
"_id" : 93,
"value" : 100
},
{
"_id" : 94,
"value" : 100
},
{
"_id" : 95,
"value" : 100
},
{
"_id" : 96,
"value" : 100
},
{
"_id" : 97,
"value" : 100
},
{
"_id" : 98,
"value" : 100
},
{
"_id" : 99,
"value" : 100
},
{
"_id" : 100,
"value" : 100
},
{
"_id" : 101,
"value" : 100
},
{
"_id" : 102,
"value" : 100
},
{
"_id" : 103,
"value" : 100
},
{
"_id" : 104,
"value" : 100
},
{
"_id" : 105,
"value" : 100
},
{
"_id" : 106,
"value" : 100
},
{
"_id" : 107,
"value" : 100
},
{
"_id" : 108,
"value" : 100
},
{
"_id" : 109,
"value" : 100
},
{
"_id" : 110,
"value" : 100
},
{
"_id" : 111,
"value" : 100
},
{
"_id" : 112,
"value" : 100
},
{
"_id" : 113,
"value" : 100
},
{
"_id" : 114,
"value" : 100
},
{
"_id" : 115,
"value" : 100
},
{
"_id" : 116,
"value" : 100
},
{
"_id" : 117,
"value" : 100
},
{
"_id" : 118,
"value" : 100
},
{
"_id" : 119,
"value" : 100
},
{
"_id" : 120,
"value" : 100
},
{
"_id" : 121,
"value" : 100
},
{
"_id" : 122,
"value" : 100
},
{
"_id" : 123,
"value" : 100
},
{
"_id" : 124,
"value" : 100
},
{
"_id" : 125,
"value" : 100
},
{
"_id" : 126,
"value" : 100
},
{
"_id" : 127,
"value" : 100
},
{
"_id" : 128,
"value" : 100
},
{
"_id" : 129,
"value" : 100
},
{
"_id" : 130,
"value" : 100
},
{
"_id" : 131,
"value" : 100
},
{
"_id" : 132,
"value" : 100
},
{
"_id" : 133,
"value" : 100
},
{
"_id" : 134,
"value" : 100
},
{
"_id" : 135,
"value" : 100
},
{
"_id" : 136,
"value" : 100
},
{
"_id" : 137,
"value" : 100
},
{
"_id" : 138,
"value" : 100
},
{
"_id" : 139,
"value" : 100
},
{
"_id" : 140,
"value" : 100
},
{
"_id" : 141,
"value" : 100
},
{
"_id" : 142,
"value" : 100
},
{
"_id" : 143,
"value" : 100
},
{
"_id" : 144,
"value" : 100
},
{
"_id" : 145,
"value" : 100
},
{
"_id" : 146,
"value" : 100
},
{
"_id" : 147,
"value" : 100
},
{
"_id" : 148,
"value" : 100
},
{
"_id" : 149,
"value" : 100
},
{
"_id" : 150,
"value" : 100
},
{
"_id" : 151,
"value" : 100
},
{
"_id" : 152,
"value" : 100
},
{
"_id" : 153,
"value" : 100
},
{
"_id" : 154,
"value" : 100
},
{
"_id" : 155,
"value" : 100
},
{
"_id" : 156,
"value" : 100
},
{
"_id" : 157,
"value" : 100
},
{
"_id" : 158,
"value" : 100
},
{
"_id" : 159,
"value" : 100
},
{
"_id" : 160,
"value" : 100
},
{
"_id" : 161,
"value" : 100
},
{
"_id" : 162,
"value" : 100
},
{
"_id" : 163,
"value" : 100
},
{
"_id" : 164,
"value" : 100
},
{
"_id" : 165,
"value" : 100
},
{
"_id" : 166,
"value" : 100
},
{
"_id" : 167,
"value" : 100
},
{
"_id" : 168,
"value" : 100
},
{
"_id" : 169,
"value" : 100
},
{
"_id" : 170,
"value" : 100
},
{
"_id" : 171,
"value" : 100
},
{
"_id" : 172,
"value" : 100
},
{
"_id" : 173,
"value" : 100
},
{
"_id" : 174,
"value" : 100
},
{
"_id" : 175,
"value" : 100
},
{
"_id" : 176,
"value" : 100
},
{
"_id" : 177,
"value" : 100
},
{
"_id" : 178,
"value" : 100
},
{
"_id" : 179,
"value" : 100
},
{
"_id" : 180,
"value" : 100
},
{
"_id" : 181,
"value" : 100
},
{
"_id" : 182,
"value" : 100
},
{
"_id" : 183,
"value" : 100
},
{
"_id" : 184,
"value" : 100
},
{
"_id" : 185,
"value" : 100
},
{
"_id" : 186,
"value" : 100
},
{
"_id" : 187,
"value" : 100
},
{
"_id" : 188,
"value" : 100
},
{
"_id" : 189,
"value" : 100
},
{
"_id" : 190,
"value" : 100
},
{
"_id" : 191,
"value" : 100
},
{
"_id" : 192,
"value" : 100
},
{
"_id" : 193,
"value" : 100
},
{
"_id" : 194,
"value" : 100
},
{
"_id" : 195,
"value" : 100
},
{
"_id" : 196,
"value" : 100
},
{
"_id" : 197,
"value" : 100
},
{
"_id" : 198,
"value" : 100
},
{
"_id" : 199,
"value" : 100
},
{
"_id" : 200,
"value" : 100
},
{
"_id" : 201,
"value" : 100
},
{
"_id" : 202,
"value" : 100
},
{
"_id" : 203,
"value" : 100
},
{
"_id" : 204,
"value" : 100
},
{
"_id" : 205,
"value" : 100
},
{
"_id" : 206,
"value" : 100
},
{
"_id" : 207,
"value" : 100
},
{
"_id" : 208,
"value" : 100
},
{
"_id" : 209,
"value" : 100
},
{
"_id" : 210,
"value" : 100
},
{
"_id" : 211,
"value" : 100
},
{
"_id" : 212,
"value" : 100
},
{
"_id" : 213,
"value" : 100
},
{
"_id" : 214,
"value" : 100
},
{
"_id" : 215,
"value" : 100
},
{
"_id" : 216,
"value" : 100
},
{
"_id" : 217,
"value" : 100
},
{
"_id" : 218,
"value" : 100
},
{
"_id" : 219,
"value" : 100
},
{
"_id" : 220,
"value" : 100
},
{
"_id" : 221,
"value" : 100
},
{
"_id" : 222,
"value" : 100
},
{
"_id" : 223,
"value" : 100
},
{
"_id" : 224,
"value" : 100
},
{
"_id" : 225,
"value" : 100
},
{
"_id" : 226,
"value" : 100
},
{
"_id" : 227,
"value" : 100
},
{
"_id" : 228,
"value" : 100
},
{
"_id" : 229,
"value" : 100
},
{
"_id" : 230,
"value" : 100
},
{
"_id" : 231,
"value" : 100
},
{
"_id" : 232,
"value" : 100
},
{
"_id" : 233,
"value" : 100
},
{
"_id" : 234,
"value" : 100
},
{
"_id" : 235,
"value" : 100
},
{
"_id" : 236,
"value" : 100
},
{
"_id" : 237,
"value" : 100
},
{
"_id" : 238,
"value" : 100
},
{
"_id" : 239,
"value" : 100
},
{
"_id" : 240,
"value" : 100
},
{
"_id" : 241,
"value" : 100
},
{
"_id" : 242,
"value" : 100
},
{
"_id" : 243,
"value" : 100
},
{
"_id" : 244,
"value" : 100
},
{
"_id" : 245,
"value" : 100
},
{
"_id" : 246,
"value" : 100
},
{
"_id" : 247,
"value" : 100
},
{
"_id" : 248,
"value" : 100
},
{
"_id" : 249,
"value" : 100
},
{
"_id" : 250,
"value" : 100
},
{
"_id" : 251,
"value" : 100
},
{
"_id" : 252,
"value" : 100
},
{
"_id" : 253,
"value" : 100
},
{
"_id" : 254,
"value" : 100
},
{
"_id" : 255,
"value" : 100
},
{
"_id" : 256,
"value" : 100
},
{
"_id" : 257,
"value" : 100
},
{
"_id" : 258,
"value" : 100
},
{
"_id" : 259,
"value" : 100
},
{
"_id" : 260,
"value" : 100
},
{
"_id" : 261,
"value" : 100
},
{
"_id" : 262,
"value" : 100
},
{
"_id" : 263,
"value" : 100
},
{
"_id" : 264,
"value" : 100
},
{
"_id" : 265,
"value" : 100
},
{
"_id" : 266,
"value" : 100
},
{
"_id" : 267,
"value" : 100
},
{
"_id" : 268,
"value" : 100
},
{
"_id" : 269,
"value" : 100
},
{
"_id" : 270,
"value" : 100
},
{
"_id" : 271,
"value" : 100
},
{
"_id" : 272,
"value" : 100
},
{
"_id" : 273,
"value" : 100
},
{
"_id" : 274,
"value" : 100
},
{
"_id" : 275,
"value" : 100
},
{
"_id" : 276,
"value" : 100
},
{
"_id" : 277,
"value" : 100
},
{
"_id" : 278,
"value" : 100
},
{
"_id" : 279,
"value" : 100
},
{
"_id" : 280,
"value" : 100
},
{
"_id" : 281,
"value" : 100
},
{
"_id" : 282,
"value" : 100
},
{
"_id" : 283,
"value" : 100
},
{
"_id" : 284,
"value" : 100
},
{
"_id" : 285,
"value" : 100
},
{
"_id" : 286,
"value" : 100
},
{
"_id" : 287,
"value" : 100
},
{
"_id" : 288,
"value" : 100
},
{
"_id" : 289,
"value" : 100
},
{
"_id" : 290,
"value" : 100
},
{
"_id" : 291,
"value" : 100
},
{
"_id" : 292,
"value" : 100
},
{
"_id" : 293,
"value" : 100
},
{
"_id" : 294,
"value" : 100
},
{
"_id" : 295,
"value" : 100
},
{
"_id" : 296,
"value" : 100
},
{
"_id" : 297,
"value" : 100
},
{
"_id" : 298,
"value" : 100
},
{
"_id" : 299,
"value" : 100
},
{
"_id" : 300,
"value" : 100
},
{
"_id" : 301,
"value" : 100
},
{
"_id" : 302,
"value" : 100
},
{
"_id" : 303,
"value" : 100
},
{
"_id" : 304,
"value" : 100
},
{
"_id" : 305,
"value" : 100
},
{
"_id" : 306,
"value" : 100
},
{
"_id" : 307,
"value" : 100
},
{
"_id" : 308,
"value" : 100
},
{
"_id" : 309,
"value" : 100
},
{
"_id" : 310,
"value" : 100
},
{
"_id" : 311,
"value" : 100
},
{
"_id" : 312,
"value" : 100
},
{
"_id" : 313,
"value" : 100
},
{
"_id" : 314,
"value" : 100
},
{
"_id" : 315,
"value" : 100
},
{
"_id" : 316,
"value" : 100
},
{
"_id" : 317,
"value" : 100
},
{
"_id" : 318,
"value" : 100
},
{
"_id" : 319,
"value" : 100
},
{
"_id" : 320,
"value" : 100
},
{
"_id" : 321,
"value" : 100
},
{
"_id" : 322,
"value" : 100
},
{
"_id" : 323,
"value" : 100
},
{
"_id" : 324,
"value" : 100
},
{
"_id" : 325,
"value" : 100
},
{
"_id" : 326,
"value" : 100
},
{
"_id" : 327,
"value" : 100
},
{
"_id" : 328,
"value" : 100
},
{
"_id" : 329,
"value" : 100
},
{
"_id" : 330,
"value" : 100
},
{
"_id" : 331,
"value" : 100
},
{
"_id" : 332,
"value" : 100
},
{
"_id" : 333,
"value" : 100
},
{
"_id" : 334,
"value" : 100
},
{
"_id" : 335,
"value" : 100
},
{
"_id" : 336,
"value" : 100
},
{
"_id" : 337,
"value" : 100
},
{
"_id" : 338,
"value" : 100
},
{
"_id" : 339,
"value" : 100
},
{
"_id" : 340,
"value" : 100
},
{
"_id" : 341,
"value" : 100
},
{
"_id" : 342,
"value" : 100
},
{
"_id" : 343,
"value" : 100
},
{
"_id" : 344,
"value" : 100
},
{
"_id" : 345,
"value" : 100
},
{
"_id" : 346,
"value" : 100
},
{
"_id" : 347,
"value" : 100
},
{
"_id" : 348,
"value" : 100
},
{
"_id" : 349,
"value" : 100
},
{
"_id" : 350,
"value" : 100
},
{
"_id" : 351,
"value" : 100
},
{
"_id" : 352,
"value" : 100
},
{
"_id" : 353,
"value" : 100
},
{
"_id" : 354,
"value" : 100
},
{
"_id" : 355,
"value" : 100
},
{
"_id" : 356,
"value" : 100
},
{
"_id" : 357,
"value" : 100
},
{
"_id" : 358,
"value" : 100
},
{
"_id" : 359,
"value" : 100
},
{
"_id" : 360,
"value" : 100
},
{
"_id" : 361,
"value" : 100
},
{
"_id" : 362,
"value" : 100
},
{
"_id" : 363,
"value" : 100
},
{
"_id" : 364,
"value" : 100
},
{
"_id" : 365,
"value" : 100
},
{
"_id" : 366,
"value" : 100
},
{
"_id" : 367,
"value" : 100
},
{
"_id" : 368,
"value" : 100
},
{
"_id" : 369,
"value" : 100
},
{
"_id" : 370,
"value" : 100
},
{
"_id" : 371,
"value" : 100
},
{
"_id" : 372,
"value" : 100
},
{
"_id" : 373,
"value" : 100
},
{
"_id" : 374,
"value" : 100
},
{
"_id" : 375,
"value" : 100
},
{
"_id" : 376,
"value" : 100
},
{
"_id" : 377,
"value" : 100
},
{
"_id" : 378,
"value" : 100
},
{
"_id" : 379,
"value" : 100
},
{
"_id" : 380,
"value" : 100
},
{
"_id" : 381,
"value" : 100
},
{
"_id" : 382,
"value" : 100
},
{
"_id" : 383,
"value" : 100
},
{
"_id" : 384,
"value" : 100
},
{
"_id" : 385,
"value" : 100
},
{
"_id" : 386,
"value" : 100
},
{
"_id" : 387,
"value" : 100
},
{
"_id" : 388,
"value" : 100
},
{
"_id" : 389,
"value" : 100
},
{
"_id" : 390,
"value" : 100
},
{
"_id" : 391,
"value" : 100
},
{
"_id" : 392,
"value" : 100
},
{
"_id" : 393,
"value" : 100
},
{
"_id" : 394,
"value" : 100
},
{
"_id" : 395,
"value" : 100
},
{
"_id" : 396,
"value" : 100
},
{
"_id" : 397,
"value" : 100
},
{
"_id" : 398,
"value" : 100
},
{
"_id" : 399,
"value" : 100
},
{
"_id" : 400,
"value" : 100
},
{
"_id" : 401,
"value" : 100
},
{
"_id" : 402,
"value" : 100
},
{
"_id" : 403,
"value" : 100
},
{
"_id" : 404,
"value" : 100
},
{
"_id" : 405,
"value" : 100
},
{
"_id" : 406,
"value" : 100
},
{
"_id" : 407,
"value" : 100
},
{
"_id" : 408,
"value" : 100
},
{
"_id" : 409,
"value" : 100
},
{
"_id" : 410,
"value" : 100
},
{
"_id" : 411,
"value" : 100
},
{
"_id" : 412,
"value" : 100
},
{
"_id" : 413,
"value" : 100
},
{
"_id" : 414,
"value" : 100
},
{
"_id" : 415,
"value" : 100
},
{
"_id" : 416,
"value" : 100
},
{
"_id" : 417,
"value" : 100
},
{
"_id" : 418,
"value" : 100
},
{
"_id" : 419,
"value" : 100
},
{
"_id" : 420,
"value" : 100
},
{
"_id" : 421,
"value" : 100
},
{
"_id" : 422,
"value" : 100
},
{
"_id" : 423,
"value" : 100
},
{
"_id" : 424,
"value" : 100
},
{
"_id" : 425,
"value" : 100
},
{
"_id" : 426,
"value" : 100
},
{
"_id" : 427,
"value" : 100
},
{
"_id" : 428,
"value" : 100
},
{
"_id" : 429,
"value" : 100
},
{
"_id" : 430,
"value" : 100
},
{
"_id" : 431,
"value" : 100
},
{
"_id" : 432,
"value" : 100
},
{
"_id" : 433,
"value" : 100
},
{
"_id" : 434,
"value" : 100
},
{
"_id" : 435,
"value" : 100
},
{
"_id" : 436,
"value" : 100
},
{
"_id" : 437,
"value" : 100
},
{
"_id" : 438,
"value" : 100
},
{
"_id" : 439,
"value" : 100
},
{
"_id" : 440,
"value" : 100
},
{
"_id" : 441,
"value" : 100
},
{
"_id" : 442,
"value" : 100
},
{
"_id" : 443,
"value" : 100
},
{
"_id" : 444,
"value" : 100
},
{
"_id" : 445,
"value" : 100
},
{
"_id" : 446,
"value" : 100
},
{
"_id" : 447,
"value" : 100
},
{
"_id" : 448,
"value" : 100
},
{
"_id" : 449,
"value" : 100
},
{
"_id" : 450,
"value" : 100
},
{
"_id" : 451,
"value" : 100
},
{
"_id" : 452,
"value" : 100
},
{
"_id" : 453,
"value" : 100
},
{
"_id" : 454,
"value" : 100
},
{
"_id" : 455,
"value" : 100
},
{
"_id" : 456,
"value" : 100
},
{
"_id" : 457,
"value" : 100
},
{
"_id" : 458,
"value" : 100
},
{
"_id" : 459,
"value" : 100
},
{
"_id" : 460,
"value" : 100
},
{
"_id" : 461,
"value" : 100
},
{
"_id" : 462,
"value" : 100
},
{
"_id" : 463,
"value" : 100
},
{
"_id" : 464,
"value" : 100
},
{
"_id" : 465,
"value" : 100
},
{
"_id" : 466,
"value" : 100
},
{
"_id" : 467,
"value" : 100
},
{
"_id" : 468,
"value" : 100
},
{
"_id" : 469,
"value" : 100
},
{
"_id" : 470,
"value" : 100
},
{
"_id" : 471,
"value" : 100
},
{
"_id" : 472,
"value" : 100
},
{
"_id" : 473,
"value" : 100
},
{
"_id" : 474,
"value" : 100
},
{
"_id" : 475,
"value" : 100
},
{
"_id" : 476,
"value" : 100
},
{
"_id" : 477,
"value" : 100
},
{
"_id" : 478,
"value" : 100
},
{
"_id" : 479,
"value" : 100
},
{
"_id" : 480,
"value" : 100
},
{
"_id" : 481,
"value" : 100
},
{
"_id" : 482,
"value" : 100
},
{
"_id" : 483,
"value" : 100
},
{
"_id" : 484,
"value" : 100
},
{
"_id" : 485,
"value" : 100
},
{
"_id" : 486,
"value" : 100
},
{
"_id" : 487,
"value" : 100
},
{
"_id" : 488,
"value" : 100
},
{
"_id" : 489,
"value" : 100
},
{
"_id" : 490,
"value" : 100
},
{
"_id" : 491,
"value" : 100
},
{
"_id" : 492,
"value" : 100
},
{
"_id" : 493,
"value" : 100
},
{
"_id" : 494,
"value" : 100
},
{
"_id" : 495,
"value" : 100
},
{
"_id" : 496,
"value" : 100
},
{
"_id" : 497,
"value" : 100
},
{
"_id" : 498,
"value" : 100
},
{
"_id" : 499,
"value" : 100
},
{
"_id" : 500,
"value" : 100
},
{
"_id" : 501,
"value" : 100
},
{
"_id" : 502,
"value" : 100
},
{
"_id" : 503,
"value" : 100
},
{
"_id" : 504,
"value" : 100
},
{
"_id" : 505,
"value" : 100
},
{
"_id" : 506,
"value" : 100
},
{
"_id" : 507,
"value" : 100
},
{
"_id" : 508,
"value" : 100
},
{
"_id" : 509,
"value" : 100
},
{
"_id" : 510,
"value" : 100
},
{
"_id" : 511,
"value" : 100
}
],
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1072,
"timing" : {
"shardProcessing" : 1057,
"postProcessing" : 14
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
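This inline run against the sharded source returns the 512 documents directly in the command reply rather than writing a collection, consistent with the empty collection name in the "MR with single shard output, NS=mrShard." line above. The total reduce count of 6,144 is the sum of the per-shard reduces (1536 + 4096) plus the 512 post-processing reduces on shard0001. A sketch of the call shape:

db.getSiblingDB("mrShard").srcSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: { inline: 1 } }   // results come back in the reply, nothing is stored
);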
m30000| Thu Jun 14 01:41:22 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_6_inc
m30000| Thu Jun 14 01:41:22 [conn7] build index mrShard.tmp.mr.srcSharded_6_inc { 0: 1 }
m30000| Thu Jun 14 01:41:22 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:22 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_6
m30000| Thu Jun 14 01:41:22 [conn7] build index mrShard.tmp.mr.srcSharded_6 { _id: 1 }
m30000| Thu Jun 14 01:41:22 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:23 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652482_11
m30000| Thu Jun 14 01:41:23 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_6
m30000| Thu Jun 14 01:41:23 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_6
m30000| Thu Jun 14 01:41:23 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_6_inc
m30000| Thu Jun 14 01:41:23 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652482_11", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:13073 r:1859851 w:815176 reslen:156 346ms
m30001| Thu Jun 14 01:41:22 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_14_inc
m30001| Thu Jun 14 01:41:22 [conn3] build index mrShard.tmp.mr.srcSharded_14_inc { 0: 1 }
m30001| Thu Jun 14 01:41:22 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:22 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_14
m30001| Thu Jun 14 01:41:22 [conn3] build index mrShard.tmp.mr.srcSharded_14 { _id: 1 }
m30001| Thu Jun 14 01:41:22 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:24 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652482_11
m30001| Thu Jun 14 01:41:24 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_14
m30001| Thu Jun 14 01:41:24 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_14
m30001| Thu Jun 14 01:41:24 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_14_inc
m30001| Thu Jun 14 01:41:24 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652482_11", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:29453 r:12304610 w:2666384 reslen:156 1025ms
m30999| Thu Jun 14 01:41:24 [conn] MR with single shard output, NS=mrShardOtherDB.mrReplaceInSharded primary=shard0000:localhost:30000
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_7
m30000| Thu Jun 14 01:41:24 [conn7] build index mrShardOtherDB.tmp.mr.srcSharded_7 { _id: 1 }
m30000| Thu Jun 14 01:41:24 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShardOtherDB.mrReplaceInSharded
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_7
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_7
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_7
m30000| Thu Jun 14 01:41:24 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652482_11
m30001| Thu Jun 14 01:41:24 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652482_11
{
"result" : {
"db" : "mrShardOtherDB",
"collection" : "mrReplaceInSharded"
},
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1055,
"timing" : {
"shardProcessing" : 1026,
"postProcessing" : 29
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30000" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
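As with the earlier non-sharded run, this output replaces a collection in a different database (mrShardOtherDB.mrReplaceInSharded), so the post-processing pass runs on shard0000, the primary shard of mrShardOtherDB. A sketch of the call shape:

db.getSiblingDB("mrShard").srcSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: { replace: "mrReplaceInSharded", db: "mrShardOtherDB" } }
);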
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_8_inc
m30000| Thu Jun 14 01:41:24 [conn7] build index mrShard.tmp.mr.srcSharded_8_inc { 0: 1 }
m30000| Thu Jun 14 01:41:24 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_8
m30000| Thu Jun 14 01:41:24 [conn7] build index mrShard.tmp.mr.srcSharded_8 { _id: 1 }
m30000| Thu Jun 14 01:41:24 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652484_12
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_8
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_8
m30000| Thu Jun 14 01:41:24 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_8_inc
m30000| Thu Jun 14 01:41:24 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652484_12", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:16776 r:2281443 w:840786 reslen:172 488ms
m30001| Thu Jun 14 01:41:24 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_15_inc
m30001| Thu Jun 14 01:41:24 [conn3] build index mrShard.tmp.mr.srcSharded_15_inc { 0: 1 }
m30001| Thu Jun 14 01:41:24 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:24 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_15
m30001| Thu Jun 14 01:41:24 [conn3] build index mrShard.tmp.mr.srcSharded_15 { _id: 1 }
m30001| Thu Jun 14 01:41:24 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:41:25 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:41:25 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652455:1804289383', sleeping for 30000ms
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652484_12
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_15
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_15
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_15_inc
m30001| Thu Jun 14 01:41:25 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652484_12", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:32001 r:13209665 w:2680861 reslen:172 1108ms
m30999| Thu Jun 14 01:41:25 [conn] MR with sharded output, NS=mrShard.mrReplaceInShardedOutSharded
m30999| Thu Jun 14 01:41:25 [conn] enable sharding on: mrShard.mrReplaceInShardedOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:25 [conn] going to create 1 chunk(s) for: mrShard.mrReplaceInShardedOutSharded using new epoch 4fd97985607081b222f40295
m30001| Thu Jun 14 01:41:25 [conn2] build index mrShard.mrReplaceInShardedOutSharded { _id: 1 }
m30001| Thu Jun 14 01:41:25 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:25 [conn2] info: creating collection mrShard.mrReplaceInShardedOutSharded on add index
m30999| Thu Jun 14 01:41:25 [conn] ChunkManager: time to load chunks for mrShard.mrReplaceInShardedOutSharded: 0ms sequenceNumber: 8 version: 1|0||4fd97985607081b222f40295 based on: (empty)
m30999| Thu Jun 14 01:41:25 [conn] resetting shard version of mrShard.mrReplaceInShardedOutSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:25 [conn] setShardVersion shard0000 localhost:30000 mrShard.mrReplaceInShardedOutSharded { setShardVersion: "mrShard.mrReplaceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:25 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:25 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReplaceInShardedOutSharded { setShardVersion: "mrShard.mrReplaceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97985607081b222f40295'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:25 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.mrReplaceInShardedOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.mrReplaceInShardedOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:25 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReplaceInShardedOutSharded { setShardVersion: "mrShard.mrReplaceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97985607081b222f40295'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:25 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:25 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:25 [conn] created new distributed lock for mrShard.mrReplaceInShardedOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:25 [conn] inserting initial doc in config.locks for lock mrShard.mrReplaceInShardedOutSharded
m30999| Thu Jun 14 01:41:25 [conn] about to acquire distributed lock 'mrShard.mrReplaceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:25 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97985607081b222f40296" } }
m30999| { "_id" : "mrShard.mrReplaceInShardedOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:25 [conn] distributed lock 'mrShard.mrReplaceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97985607081b222f40296
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_16
m30001| Thu Jun 14 01:41:25 [conn3] build index mrShard.tmp.mr.srcSharded_16 { _id: 1 }
m30001| Thu Jun 14 01:41:25 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.mrReplaceInShardedOutSharded
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_16
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_16
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_16
m30999| Thu Jun 14 01:41:25 [conn] distributed lock 'mrShard.mrReplaceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30000| Thu Jun 14 01:41:25 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652484_12
m30001| Thu Jun 14 01:41:25 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652484_12
{
"result" : "mrReplaceInShardedOutSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1156,
"timing" : {
"shardProcessing" : 1121,
"postProcessing" : 34
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
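The result block above corresponds to sharded output replaced into mrShard.mrReplaceInShardedOutSharded (see the mongos "MR with sharded output" line). A minimal sketch of such a call, assuming the same map/reduce functions and only the out clause changing:

    var map = function () { emit(this.i, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    // replace-mode output into a sharded collection in the same database
    db.getSiblingDB("mrShard").srcSharded.mapReduce(map, reduce, {
        out: { replace: "mrReplaceInShardedOutSharded", sharded: true }
    });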
m30000| Thu Jun 14 01:41:25 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_9_inc
m30000| Thu Jun 14 01:41:25 [conn7] build index mrShard.tmp.mr.srcSharded_9_inc { 0: 1 }
m30000| Thu Jun 14 01:41:25 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:25 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_9
m30000| Thu Jun 14 01:41:25 [conn7] build index mrShard.tmp.mr.srcSharded_9 { _id: 1 }
m30000| Thu Jun 14 01:41:25 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:25 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652485_13
m30000| Thu Jun 14 01:41:25 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_9
m30000| Thu Jun 14 01:41:25 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_9
m30000| Thu Jun 14 01:41:25 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_9_inc
m30000| Thu Jun 14 01:41:25 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652485_13", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:18697 r:2560585 w:855298 reslen:172 375ms
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_17_inc
m30001| Thu Jun 14 01:41:25 [conn3] build index mrShard.tmp.mr.srcSharded_17_inc { 0: 1 }
m30001| Thu Jun 14 01:41:25 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:25 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_17
m30001| Thu Jun 14 01:41:25 [conn3] build index mrShard.tmp.mr.srcSharded_17 { _id: 1 }
m30001| Thu Jun 14 01:41:25 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652485_13
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_17
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_17
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_17_inc
m30001| Thu Jun 14 01:41:26 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652485_13", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:36590 r:14117180 w:2705735 reslen:172 1057ms
m30999| Thu Jun 14 01:41:26 [conn] MR with sharded output, NS=mrShard.mrMergeInShardedOutSharded
m30999| Thu Jun 14 01:41:26 [conn] enable sharding on: mrShard.mrMergeInShardedOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:26 [conn] going to create 1 chunk(s) for: mrShard.mrMergeInShardedOutSharded using new epoch 4fd97986607081b222f40297
m30001| Thu Jun 14 01:41:26 [conn2] build index mrShard.mrMergeInShardedOutSharded { _id: 1 }
m30001| Thu Jun 14 01:41:26 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:26 [conn2] info: creating collection mrShard.mrMergeInShardedOutSharded on add index
m30999| Thu Jun 14 01:41:26 [conn] ChunkManager: time to load chunks for mrShard.mrMergeInShardedOutSharded: 0ms sequenceNumber: 9 version: 1|0||4fd97986607081b222f40297 based on: (empty)
m30999| Thu Jun 14 01:41:26 [conn] resetting shard version of mrShard.mrMergeInShardedOutSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:26 [conn] setShardVersion shard0000 localhost:30000 mrShard.mrMergeInShardedOutSharded { setShardVersion: "mrShard.mrMergeInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:26 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:26 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrMergeInShardedOutSharded { setShardVersion: "mrShard.mrMergeInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97986607081b222f40297'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:26 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.mrMergeInShardedOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.mrMergeInShardedOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:26 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrMergeInShardedOutSharded { setShardVersion: "mrShard.mrMergeInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97986607081b222f40297'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:26 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:26 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:26 [conn] created new distributed lock for mrShard.mrMergeInShardedOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:26 [conn] inserting initial doc in config.locks for lock mrShard.mrMergeInShardedOutSharded
m30999| Thu Jun 14 01:41:26 [conn] about to acquire distributed lock 'mrShard.mrMergeInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:26 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97986607081b222f40298" } }
m30999| { "_id" : "mrShard.mrMergeInShardedOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:26 [conn] distributed lock 'mrShard.mrMergeInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97986607081b222f40298
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_18
m30001| Thu Jun 14 01:41:26 [conn3] build index mrShard.tmp.mr.srcSharded_18 { _id: 1 }
m30001| Thu Jun 14 01:41:26 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.mrMergeInShardedOutSharded
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_18
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_18
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_18
m30999| Thu Jun 14 01:41:26 [conn] distributed lock 'mrShard.mrMergeInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30000| Thu Jun 14 01:41:26 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652485_13
m30001| Thu Jun 14 01:41:26 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652485_13
{
"result" : "mrMergeInShardedOutSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1093,
"timing" : {
"shardProcessing" : 1058,
"postProcessing" : 34
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
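The merge run above presumably differs only in the output mode; a hedged sketch:

    var map = function () { emit(this.i, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    // merge-mode output into the sharded collection named in the result above
    db.getSiblingDB("mrShard").srcSharded.mapReduce(map, reduce, {
        out: { merge: "mrMergeInShardedOutSharded", sharded: true }
    });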
m30000| Thu Jun 14 01:41:26 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_10_inc
m30000| Thu Jun 14 01:41:26 [conn7] build index mrShard.tmp.mr.srcSharded_10_inc { 0: 1 }
m30000| Thu Jun 14 01:41:26 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:26 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_10
m30000| Thu Jun 14 01:41:26 [conn7] build index mrShard.tmp.mr.srcSharded_10 { _id: 1 }
m30000| Thu Jun 14 01:41:26 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:26 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652486_14
m30000| Thu Jun 14 01:41:26 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_10
m30000| Thu Jun 14 01:41:26 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_10
m30000| Thu Jun 14 01:41:26 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_10_inc
m30000| Thu Jun 14 01:41:26 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652486_14", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:20652 r:2840767 w:869794 reslen:172 346ms
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_19_inc
m30001| Thu Jun 14 01:41:26 [conn3] build index mrShard.tmp.mr.srcSharded_19_inc { 0: 1 }
m30001| Thu Jun 14 01:41:26 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:26 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_19
m30001| Thu Jun 14 01:41:26 [conn3] build index mrShard.tmp.mr.srcSharded_19 { _id: 1 }
m30001| Thu Jun 14 01:41:26 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652486_14
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_19
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_19
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_19_inc
m30001| Thu Jun 14 01:41:27 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652486_14", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:41123 r:14993449 w:2730790 reslen:172 1028ms
m30999| Thu Jun 14 01:41:27 [conn] MR with sharded output, NS=mrShard.mrReduceInShardedOutSharded
m30999| Thu Jun 14 01:41:27 [conn] enable sharding on: mrShard.mrReduceInShardedOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:27 [conn] going to create 1 chunk(s) for: mrShard.mrReduceInShardedOutSharded using new epoch 4fd97987607081b222f40299
m30001| Thu Jun 14 01:41:27 [conn2] build index mrShard.mrReduceInShardedOutSharded { _id: 1 }
m30001| Thu Jun 14 01:41:27 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:27 [conn2] info: creating collection mrShard.mrReduceInShardedOutSharded on add index
m30999| Thu Jun 14 01:41:27 [conn] ChunkManager: time to load chunks for mrShard.mrReduceInShardedOutSharded: 0ms sequenceNumber: 10 version: 1|0||4fd97987607081b222f40299 based on: (empty)
m30999| Thu Jun 14 01:41:27 [conn] resetting shard version of mrShard.mrReduceInShardedOutSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:27 [conn] setShardVersion shard0000 localhost:30000 mrShard.mrReduceInShardedOutSharded { setShardVersion: "mrShard.mrReduceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:27 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:27 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReduceInShardedOutSharded { setShardVersion: "mrShard.mrReduceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97987607081b222f40299'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:27 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.mrReduceInShardedOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.mrReduceInShardedOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:27 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReduceInShardedOutSharded { setShardVersion: "mrShard.mrReduceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97987607081b222f40299'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:27 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:27 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:27 [conn] created new distributed lock for mrShard.mrReduceInShardedOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:27 [conn] inserting initial doc in config.locks for lock mrShard.mrReduceInShardedOutSharded
m30999| Thu Jun 14 01:41:27 [conn] about to acquire distributed lock 'mrShard.mrReduceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:27 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97987607081b222f4029a" } }
m30999| { "_id" : "mrShard.mrReduceInShardedOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:27 [conn] distributed lock 'mrShard.mrReduceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97987607081b222f4029a
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_20
m30001| Thu Jun 14 01:41:27 [conn3] build index mrShard.tmp.mr.srcSharded_20 { _id: 1 }
m30001| Thu Jun 14 01:41:27 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.mrReduceInShardedOutSharded
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_20
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_20
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_20
m30999| Thu Jun 14 01:41:27 [conn] distributed lock 'mrShard.mrReduceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30000| Thu Jun 14 01:41:27 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652486_14
m30001| Thu Jun 14 01:41:27 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652486_14
{
"result" : "mrReduceInShardedOutSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1063,
"timing" : {
"shardProcessing" : 1028,
"postProcessing" : 35
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
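Likewise for the reduce-mode run above; a hedged sketch:

    var map = function () { emit(this.i, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    // reduce-mode output: documents already in the target collection are re-reduced with new results
    db.getSiblingDB("mrShard").srcSharded.mapReduce(map, reduce, {
        out: { reduce: "mrReduceInShardedOutSharded", sharded: true }
    });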
m30000| Thu Jun 14 01:41:27 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_11_inc
m30000| Thu Jun 14 01:41:27 [conn7] build index mrShard.tmp.mr.srcSharded_11_inc { 0: 1 }
m30000| Thu Jun 14 01:41:27 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:27 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_11
m30000| Thu Jun 14 01:41:27 [conn7] build index mrShard.tmp.mr.srcSharded_11 { _id: 1 }
m30000| Thu Jun 14 01:41:27 [conn7] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_21_inc
m30001| Thu Jun 14 01:41:27 [conn3] build index mrShard.tmp.mr.srcSharded_21_inc { 0: 1 }
m30001| Thu Jun 14 01:41:27 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:27 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_21
m30001| Thu Jun 14 01:41:27 [conn3] build index mrShard.tmp.mr.srcSharded_21 { _id: 1 }
m30001| Thu Jun 14 01:41:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:27 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652487_15
m30000| Thu Jun 14 01:41:27 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_11
m30000| Thu Jun 14 01:41:27 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_11
m30000| Thu Jun 14 01:41:27 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_11_inc
m30000| Thu Jun 14 01:41:27 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652487_15", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:22515 r:3172614 w:884285 reslen:172 398ms
m30001| Thu Jun 14 01:41:28 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652487_15
m30001| Thu Jun 14 01:41:28 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_21
m30001| Thu Jun 14 01:41:28 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_21
m30001| Thu Jun 14 01:41:28 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_21_inc
m30001| Thu Jun 14 01:41:28 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652487_15", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:45811 r:15950775 w:2755926 reslen:172 1116ms
m30999| Thu Jun 14 01:41:28 [conn] MR with sharded output, NS=mrShard.
m30999| Thu Jun 14 01:41:28 [conn] enable sharding on: mrShard. with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:28 [conn] going to create 1 chunk(s) for: mrShard. using new epoch 4fd97988607081b222f4029b
m30001| Thu Jun 14 01:41:28 [conn2] Assertion: 10356:invalid ns: mrShard.
m30001| 0x8800c8a 0x819da45 0x874d4b7 0x874d666 0x85e9340 0x871f1ea 0x85ed4f3 0x85eefcd 0x85b4a0d 0x85b6857 0x85be6b6 0x818d455 0x87d2c25 0x939542 0x2ceb6e
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x8800c8a]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo10logContextEPKc+0xa5) [0x819da45]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11msgassertedEiPKc+0xc7) [0x874d4b7]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod [0x874d666]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo12userCreateNSEPKcNS_7BSONObjERSsbPb+0x170) [0x85e9340]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo19prepareToBuildIndexERKNS_7BSONObjEbRSsRPNS_16NamespaceDetailsERS0_+0x140a) [0x871f1ea]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibbPb+0x7a3) [0x85ed4f3]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo11DataFileMgr16insertWithObjModEPKcRNS_7BSONObjEb+0x5d) [0x85eefcd]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo14checkAndInsertEPKcRNS_7BSONObjE+0xad) [0x85b4a0d]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo14receivedInsertERNS_7MessageERNS_5CurOpE+0x3e7) [0x85b6857]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x13a6) [0x85be6b6]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x85) [0x818d455]
m30001| /mnt/slaves/Linux_32bit/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x2d5) [0x87d2c25]
m30001| /lib/i686/nosegneg/libpthread.so.0 [0x939542]
m30001| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x2ceb6e]
m30001| Thu Jun 14 01:41:28 [conn2] insert mrShard.system.indexes keyUpdates:0 exception: invalid ns: mrShard. code:10356 locks(micros) R:8 W:72 r:77200 w:1733051 2ms
m30999| Thu Jun 14 01:41:28 [conn] ChunkManager: time to load chunks for mrShard.: 0ms sequenceNumber: 11 version: 1|0||4fd97988607081b222f4029b based on: (empty)
m30999| Thu Jun 14 01:41:28 [conn] setShardVersion shard0001 localhost:30001 mrShard. { setShardVersion: "mrShard.", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97988607081b222f4029b'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:28 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.", need_authoritative: true, errmsg: "first time for collection 'mrShard.'", ok: 0.0 }
m30999| Thu Jun 14 01:41:28 [conn] setShardVersion shard0001 localhost:30001 mrShard. { setShardVersion: "mrShard.", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97988607081b222f4029b'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:28 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:28 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:28 [conn] created new distributed lock for mrShard. on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:28 [conn] inserting initial doc in config.locks for lock mrShard.
m30999| Thu Jun 14 01:41:28 [conn] about to acquire distributed lock 'mrShard./domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:28 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97988607081b222f4029c" } }
m30999| { "_id" : "mrShard.",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:28 [conn] distributed lock 'mrShard./domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97988607081b222f4029c
m30999| Thu Jun 14 01:41:28 [conn] distributed lock 'mrShard./domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30000| Thu Jun 14 01:41:28 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652487_15
m30001| Thu Jun 14 01:41:28 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652487_15
{
"results" : [
{
"_id" : 0,
"value" : 100
},
{
"_id" : 1,
"value" : 100
},
{
"_id" : 2,
"value" : 100
},
{
"_id" : 3,
"value" : 100
},
{
"_id" : 4,
"value" : 100
},
{
"_id" : 5,
"value" : 100
},
{
"_id" : 6,
"value" : 100
},
{
"_id" : 7,
"value" : 100
},
{
"_id" : 8,
"value" : 100
},
{
"_id" : 9,
"value" : 100
},
{
"_id" : 10,
"value" : 100
},
{
"_id" : 11,
"value" : 100
},
{
"_id" : 12,
"value" : 100
},
{
"_id" : 13,
"value" : 100
},
{
"_id" : 14,
"value" : 100
},
{
"_id" : 15,
"value" : 100
},
{
"_id" : 16,
"value" : 100
},
{
"_id" : 17,
"value" : 100
},
{
"_id" : 18,
"value" : 100
},
{
"_id" : 19,
"value" : 100
},
{
"_id" : 20,
"value" : 100
},
{
"_id" : 21,
"value" : 100
},
{
"_id" : 22,
"value" : 100
},
{
"_id" : 23,
"value" : 100
},
{
"_id" : 24,
"value" : 100
},
{
"_id" : 25,
"value" : 100
},
{
"_id" : 26,
"value" : 100
},
{
"_id" : 27,
"value" : 100
},
{
"_id" : 28,
"value" : 100
},
{
"_id" : 29,
"value" : 100
},
{
"_id" : 30,
"value" : 100
},
{
"_id" : 31,
"value" : 100
},
{
"_id" : 32,
"value" : 100
},
{
"_id" : 33,
"value" : 100
},
{
"_id" : 34,
"value" : 100
},
{
"_id" : 35,
"value" : 100
},
{
"_id" : 36,
"value" : 100
},
{
"_id" : 37,
"value" : 100
},
{
"_id" : 38,
"value" : 100
},
{
"_id" : 39,
"value" : 100
},
{
"_id" : 40,
"value" : 100
},
{
"_id" : 41,
"value" : 100
},
{
"_id" : 42,
"value" : 100
},
{
"_id" : 43,
"value" : 100
},
{
"_id" : 44,
"value" : 100
},
{
"_id" : 45,
"value" : 100
},
{
"_id" : 46,
"value" : 100
},
{
"_id" : 47,
"value" : 100
},
{
"_id" : 48,
"value" : 100
},
{
"_id" : 49,
"value" : 100
},
{
"_id" : 50,
"value" : 100
},
{
"_id" : 51,
"value" : 100
},
{
"_id" : 52,
"value" : 100
},
{
"_id" : 53,
"value" : 100
},
{
"_id" : 54,
"value" : 100
},
{
"_id" : 55,
"value" : 100
},
{
"_id" : 56,
"value" : 100
},
{
"_id" : 57,
"value" : 100
},
{
"_id" : 58,
"value" : 100
},
{
"_id" : 59,
"value" : 100
},
{
"_id" : 60,
"value" : 100
},
{
"_id" : 61,
"value" : 100
},
{
"_id" : 62,
"value" : 100
},
{
"_id" : 63,
"value" : 100
},
{
"_id" : 64,
"value" : 100
},
{
"_id" : 65,
"value" : 100
},
{
"_id" : 66,
"value" : 100
},
{
"_id" : 67,
"value" : 100
},
{
"_id" : 68,
"value" : 100
},
{
"_id" : 69,
"value" : 100
},
{
"_id" : 70,
"value" : 100
},
{
"_id" : 71,
"value" : 100
},
{
"_id" : 72,
"value" : 100
},
{
"_id" : 73,
"value" : 100
},
{
"_id" : 74,
"value" : 100
},
{
"_id" : 75,
"value" : 100
},
{
"_id" : 76,
"value" : 100
},
{
"_id" : 77,
"value" : 100
},
{
"_id" : 78,
"value" : 100
},
{
"_id" : 79,
"value" : 100
},
{
"_id" : 80,
"value" : 100
},
{
"_id" : 81,
"value" : 100
},
{
"_id" : 82,
"value" : 100
},
{
"_id" : 83,
"value" : 100
},
{
"_id" : 84,
"value" : 100
},
{
"_id" : 85,
"value" : 100
},
{
"_id" : 86,
"value" : 100
},
{
"_id" : 87,
"value" : 100
},
{
"_id" : 88,
"value" : 100
},
{
"_id" : 89,
"value" : 100
},
{
"_id" : 90,
"value" : 100
},
{
"_id" : 91,
"value" : 100
},
{
"_id" : 92,
"value" : 100
},
{
"_id" : 93,
"value" : 100
},
{
"_id" : 94,
"value" : 100
},
{
"_id" : 95,
"value" : 100
},
{
"_id" : 96,
"value" : 100
},
{
"_id" : 97,
"value" : 100
},
{
"_id" : 98,
"value" : 100
},
{
"_id" : 99,
"value" : 100
},
{
"_id" : 100,
"value" : 100
},
{
"_id" : 101,
"value" : 100
},
{
"_id" : 102,
"value" : 100
},
{
"_id" : 103,
"value" : 100
},
{
"_id" : 104,
"value" : 100
},
{
"_id" : 105,
"value" : 100
},
{
"_id" : 106,
"value" : 100
},
{
"_id" : 107,
"value" : 100
},
{
"_id" : 108,
"value" : 100
},
{
"_id" : 109,
"value" : 100
},
{
"_id" : 110,
"value" : 100
},
{
"_id" : 111,
"value" : 100
},
{
"_id" : 112,
"value" : 100
},
{
"_id" : 113,
"value" : 100
},
{
"_id" : 114,
"value" : 100
},
{
"_id" : 115,
"value" : 100
},
{
"_id" : 116,
"value" : 100
},
{
"_id" : 117,
"value" : 100
},
{
"_id" : 118,
"value" : 100
},
{
"_id" : 119,
"value" : 100
},
{
"_id" : 120,
"value" : 100
},
{
"_id" : 121,
"value" : 100
},
{
"_id" : 122,
"value" : 100
},
{
"_id" : 123,
"value" : 100
},
{
"_id" : 124,
"value" : 100
},
{
"_id" : 125,
"value" : 100
},
{
"_id" : 126,
"value" : 100
},
{
"_id" : 127,
"value" : 100
},
{
"_id" : 128,
"value" : 100
},
{
"_id" : 129,
"value" : 100
},
{
"_id" : 130,
"value" : 100
},
{
"_id" : 131,
"value" : 100
},
{
"_id" : 132,
"value" : 100
},
{
"_id" : 133,
"value" : 100
},
{
"_id" : 134,
"value" : 100
},
{
"_id" : 135,
"value" : 100
},
{
"_id" : 136,
"value" : 100
},
{
"_id" : 137,
"value" : 100
},
{
"_id" : 138,
"value" : 100
},
{
"_id" : 139,
"value" : 100
},
{
"_id" : 140,
"value" : 100
},
{
"_id" : 141,
"value" : 100
},
{
"_id" : 142,
"value" : 100
},
{
"_id" : 143,
"value" : 100
},
{
"_id" : 144,
"value" : 100
},
{
"_id" : 145,
"value" : 100
},
{
"_id" : 146,
"value" : 100
},
{
"_id" : 147,
"value" : 100
},
{
"_id" : 148,
"value" : 100
},
{
"_id" : 149,
"value" : 100
},
{
"_id" : 150,
"value" : 100
},
{
"_id" : 151,
"value" : 100
},
{
"_id" : 152,
"value" : 100
},
{
"_id" : 153,
"value" : 100
},
{
"_id" : 154,
"value" : 100
},
{
"_id" : 155,
"value" : 100
},
{
"_id" : 156,
"value" : 100
},
{
"_id" : 157,
"value" : 100
},
{
"_id" : 158,
"value" : 100
},
{
"_id" : 159,
"value" : 100
},
{
"_id" : 160,
"value" : 100
},
{
"_id" : 161,
"value" : 100
},
{
"_id" : 162,
"value" : 100
},
{
"_id" : 163,
"value" : 100
},
{
"_id" : 164,
"value" : 100
},
{
"_id" : 165,
"value" : 100
},
{
"_id" : 166,
"value" : 100
},
{
"_id" : 167,
"value" : 100
},
{
"_id" : 168,
"value" : 100
},
{
"_id" : 169,
"value" : 100
},
{
"_id" : 170,
"value" : 100
},
{
"_id" : 171,
"value" : 100
},
{
"_id" : 172,
"value" : 100
},
{
"_id" : 173,
"value" : 100
},
{
"_id" : 174,
"value" : 100
},
{
"_id" : 175,
"value" : 100
},
{
"_id" : 176,
"value" : 100
},
{
"_id" : 177,
"value" : 100
},
{
"_id" : 178,
"value" : 100
},
{
"_id" : 179,
"value" : 100
},
{
"_id" : 180,
"value" : 100
},
{
"_id" : 181,
"value" : 100
},
{
"_id" : 182,
"value" : 100
},
{
"_id" : 183,
"value" : 100
},
{
"_id" : 184,
"value" : 100
},
{
"_id" : 185,
"value" : 100
},
{
"_id" : 186,
"value" : 100
},
{
"_id" : 187,
"value" : 100
},
{
"_id" : 188,
"value" : 100
},
{
"_id" : 189,
"value" : 100
},
{
"_id" : 190,
"value" : 100
},
{
"_id" : 191,
"value" : 100
},
{
"_id" : 192,
"value" : 100
},
{
"_id" : 193,
"value" : 100
},
{
"_id" : 194,
"value" : 100
},
{
"_id" : 195,
"value" : 100
},
{
"_id" : 196,
"value" : 100
},
{
"_id" : 197,
"value" : 100
},
{
"_id" : 198,
"value" : 100
},
{
"_id" : 199,
"value" : 100
},
{
"_id" : 200,
"value" : 100
},
{
"_id" : 201,
"value" : 100
},
{
"_id" : 202,
"value" : 100
},
{
"_id" : 203,
"value" : 100
},
{
"_id" : 204,
"value" : 100
},
{
"_id" : 205,
"value" : 100
},
{
"_id" : 206,
"value" : 100
},
{
"_id" : 207,
"value" : 100
},
{
"_id" : 208,
"value" : 100
},
{
"_id" : 209,
"value" : 100
},
{
"_id" : 210,
"value" : 100
},
{
"_id" : 211,
"value" : 100
},
{
"_id" : 212,
"value" : 100
},
{
"_id" : 213,
"value" : 100
},
{
"_id" : 214,
"value" : 100
},
{
"_id" : 215,
"value" : 100
},
{
"_id" : 216,
"value" : 100
},
{
"_id" : 217,
"value" : 100
},
{
"_id" : 218,
"value" : 100
},
{
"_id" : 219,
"value" : 100
},
{
"_id" : 220,
"value" : 100
},
{
"_id" : 221,
"value" : 100
},
{
"_id" : 222,
"value" : 100
},
{
"_id" : 223,
"value" : 100
},
{
"_id" : 224,
"value" : 100
},
{
"_id" : 225,
"value" : 100
},
{
"_id" : 226,
"value" : 100
},
{
"_id" : 227,
"value" : 100
},
{
"_id" : 228,
"value" : 100
},
{
"_id" : 229,
"value" : 100
},
{
"_id" : 230,
"value" : 100
},
{
"_id" : 231,
"value" : 100
},
{
"_id" : 232,
"value" : 100
},
{
"_id" : 233,
"value" : 100
},
{
"_id" : 234,
"value" : 100
},
{
"_id" : 235,
"value" : 100
},
{
"_id" : 236,
"value" : 100
},
{
"_id" : 237,
"value" : 100
},
{
"_id" : 238,
"value" : 100
},
{
"_id" : 239,
"value" : 100
},
{
"_id" : 240,
"value" : 100
},
{
"_id" : 241,
"value" : 100
},
{
"_id" : 242,
"value" : 100
},
{
"_id" : 243,
"value" : 100
},
{
"_id" : 244,
"value" : 100
},
{
"_id" : 245,
"value" : 100
},
{
"_id" : 246,
"value" : 100
},
{
"_id" : 247,
"value" : 100
},
{
"_id" : 248,
"value" : 100
},
{
"_id" : 249,
"value" : 100
},
{
"_id" : 250,
"value" : 100
},
{
"_id" : 251,
"value" : 100
},
{
"_id" : 252,
"value" : 100
},
{
"_id" : 253,
"value" : 100
},
{
"_id" : 254,
"value" : 100
},
{
"_id" : 255,
"value" : 100
},
{
"_id" : 256,
"value" : 100
},
{
"_id" : 257,
"value" : 100
},
{
"_id" : 258,
"value" : 100
},
{
"_id" : 259,
"value" : 100
},
{
"_id" : 260,
"value" : 100
},
{
"_id" : 261,
"value" : 100
},
{
"_id" : 262,
"value" : 100
},
{
"_id" : 263,
"value" : 100
},
{
"_id" : 264,
"value" : 100
},
{
"_id" : 265,
"value" : 100
},
{
"_id" : 266,
"value" : 100
},
{
"_id" : 267,
"value" : 100
},
{
"_id" : 268,
"value" : 100
},
{
"_id" : 269,
"value" : 100
},
{
"_id" : 270,
"value" : 100
},
{
"_id" : 271,
"value" : 100
},
{
"_id" : 272,
"value" : 100
},
{
"_id" : 273,
"value" : 100
},
{
"_id" : 274,
"value" : 100
},
{
"_id" : 275,
"value" : 100
},
{
"_id" : 276,
"value" : 100
},
{
"_id" : 277,
"value" : 100
},
{
"_id" : 278,
"value" : 100
},
{
"_id" : 279,
"value" : 100
},
{
"_id" : 280,
"value" : 100
},
{
"_id" : 281,
"value" : 100
},
{
"_id" : 282,
"value" : 100
},
{
"_id" : 283,
"value" : 100
},
{
"_id" : 284,
"value" : 100
},
{
"_id" : 285,
"value" : 100
},
{
"_id" : 286,
"value" : 100
},
{
"_id" : 287,
"value" : 100
},
{
"_id" : 288,
"value" : 100
},
{
"_id" : 289,
"value" : 100
},
{
"_id" : 290,
"value" : 100
},
{
"_id" : 291,
"value" : 100
},
{
"_id" : 292,
"value" : 100
},
{
"_id" : 293,
"value" : 100
},
{
"_id" : 294,
"value" : 100
},
{
"_id" : 295,
"value" : 100
},
{
"_id" : 296,
"value" : 100
},
{
"_id" : 297,
"value" : 100
},
{
"_id" : 298,
"value" : 100
},
{
"_id" : 299,
"value" : 100
},
{
"_id" : 300,
"value" : 100
},
{
"_id" : 301,
"value" : 100
},
{
"_id" : 302,
"value" : 100
},
{
"_id" : 303,
"value" : 100
},
{
"_id" : 304,
"value" : 100
},
{
"_id" : 305,
"value" : 100
},
{
"_id" : 306,
"value" : 100
},
{
"_id" : 307,
"value" : 100
},
{
"_id" : 308,
"value" : 100
},
{
"_id" : 309,
"value" : 100
},
{
"_id" : 310,
"value" : 100
},
{
"_id" : 311,
"value" : 100
},
{
"_id" : 312,
"value" : 100
},
{
"_id" : 313,
"value" : 100
},
{
"_id" : 314,
"value" : 100
},
{
"_id" : 315,
"value" : 100
},
{
"_id" : 316,
"value" : 100
},
{
"_id" : 317,
"value" : 100
},
{
"_id" : 318,
"value" : 100
},
{
"_id" : 319,
"value" : 100
},
{
"_id" : 320,
"value" : 100
},
{
"_id" : 321,
"value" : 100
},
{
"_id" : 322,
"value" : 100
},
{
"_id" : 323,
"value" : 100
},
{
"_id" : 324,
"value" : 100
},
{
"_id" : 325,
"value" : 100
},
{
"_id" : 326,
"value" : 100
},
{
"_id" : 327,
"value" : 100
},
{
"_id" : 328,
"value" : 100
},
{
"_id" : 329,
"value" : 100
},
{
"_id" : 330,
"value" : 100
},
{
"_id" : 331,
"value" : 100
},
{
"_id" : 332,
"value" : 100
},
{
"_id" : 333,
"value" : 100
},
{
"_id" : 334,
"value" : 100
},
{
"_id" : 335,
"value" : 100
},
{
"_id" : 336,
"value" : 100
},
{
"_id" : 337,
"value" : 100
},
{
"_id" : 338,
"value" : 100
},
{
"_id" : 339,
"value" : 100
},
{
"_id" : 340,
"value" : 100
},
{
"_id" : 341,
"value" : 100
},
{
"_id" : 342,
"value" : 100
},
{
"_id" : 343,
"value" : 100
},
{
"_id" : 344,
"value" : 100
},
{
"_id" : 345,
"value" : 100
},
{
"_id" : 346,
"value" : 100
},
{
"_id" : 347,
"value" : 100
},
{
"_id" : 348,
"value" : 100
},
{
"_id" : 349,
"value" : 100
},
{
"_id" : 350,
"value" : 100
},
{
"_id" : 351,
"value" : 100
},
{
"_id" : 352,
"value" : 100
},
{
"_id" : 353,
"value" : 100
},
{
"_id" : 354,
"value" : 100
},
{
"_id" : 355,
"value" : 100
},
{
"_id" : 356,
"value" : 100
},
{
"_id" : 357,
"value" : 100
},
{
"_id" : 358,
"value" : 100
},
{
"_id" : 359,
"value" : 100
},
{
"_id" : 360,
"value" : 100
},
{
"_id" : 361,
"value" : 100
},
{
"_id" : 362,
"value" : 100
},
{
"_id" : 363,
"value" : 100
},
{
"_id" : 364,
"value" : 100
},
{
"_id" : 365,
"value" : 100
},
{
"_id" : 366,
"value" : 100
},
{
"_id" : 367,
"value" : 100
},
{
"_id" : 368,
"value" : 100
},
{
"_id" : 369,
"value" : 100
},
{
"_id" : 370,
"value" : 100
},
{
"_id" : 371,
"value" : 100
},
{
"_id" : 372,
"value" : 100
},
{
"_id" : 373,
"value" : 100
},
{
"_id" : 374,
"value" : 100
},
{
"_id" : 375,
"value" : 100
},
{
"_id" : 376,
"value" : 100
},
{
"_id" : 377,
"value" : 100
},
{
"_id" : 378,
"value" : 100
},
{
"_id" : 379,
"value" : 100
},
{
"_id" : 380,
"value" : 100
},
{
"_id" : 381,
"value" : 100
},
{
"_id" : 382,
"value" : 100
},
{
"_id" : 383,
"value" : 100
},
{
"_id" : 384,
"value" : 100
},
{
"_id" : 385,
"value" : 100
},
{
"_id" : 386,
"value" : 100
},
{
"_id" : 387,
"value" : 100
},
{
"_id" : 388,
"value" : 100
},
{
"_id" : 389,
"value" : 100
},
{
"_id" : 390,
"value" : 100
},
{
"_id" : 391,
"value" : 100
},
{
"_id" : 392,
"value" : 100
},
{
"_id" : 393,
"value" : 100
},
{
"_id" : 394,
"value" : 100
},
{
"_id" : 395,
"value" : 100
},
{
"_id" : 396,
"value" : 100
},
{
"_id" : 397,
"value" : 100
},
{
"_id" : 398,
"value" : 100
},
{
"_id" : 399,
"value" : 100
},
{
"_id" : 400,
"value" : 100
},
{
"_id" : 401,
"value" : 100
},
{
"_id" : 402,
"value" : 100
},
{
"_id" : 403,
"value" : 100
},
{
"_id" : 404,
"value" : 100
},
{
"_id" : 405,
"value" : 100
},
{
"_id" : 406,
"value" : 100
},
{
"_id" : 407,
"value" : 100
},
{
"_id" : 408,
"value" : 100
},
{
"_id" : 409,
"value" : 100
},
{
"_id" : 410,
"value" : 100
},
{
"_id" : 411,
"value" : 100
},
{
"_id" : 412,
"value" : 100
},
{
"_id" : 413,
"value" : 100
},
{
"_id" : 414,
"value" : 100
},
{
"_id" : 415,
"value" : 100
},
{
"_id" : 416,
"value" : 100
},
{
"_id" : 417,
"value" : 100
},
{
"_id" : 418,
"value" : 100
},
{
"_id" : 419,
"value" : 100
},
{
"_id" : 420,
"value" : 100
},
{
"_id" : 421,
"value" : 100
},
{
"_id" : 422,
"value" : 100
},
{
"_id" : 423,
"value" : 100
},
{
"_id" : 424,
"value" : 100
},
{
"_id" : 425,
"value" : 100
},
{
"_id" : 426,
"value" : 100
},
{
"_id" : 427,
"value" : 100
},
{
"_id" : 428,
"value" : 100
},
{
"_id" : 429,
"value" : 100
},
{
"_id" : 430,
"value" : 100
},
{
"_id" : 431,
"value" : 100
},
{
"_id" : 432,
"value" : 100
},
{
"_id" : 433,
"value" : 100
},
{
"_id" : 434,
"value" : 100
},
{
"_id" : 435,
"value" : 100
},
{
"_id" : 436,
"value" : 100
},
{
"_id" : 437,
"value" : 100
},
{
"_id" : 438,
"value" : 100
},
{
"_id" : 439,
"value" : 100
},
{
"_id" : 440,
"value" : 100
},
{
"_id" : 441,
"value" : 100
},
{
"_id" : 442,
"value" : 100
},
{
"_id" : 443,
"value" : 100
},
{
"_id" : 444,
"value" : 100
},
{
"_id" : 445,
"value" : 100
},
{
"_id" : 446,
"value" : 100
},
{
"_id" : 447,
"value" : 100
},
{
"_id" : 448,
"value" : 100
},
{
"_id" : 449,
"value" : 100
},
{
"_id" : 450,
"value" : 100
},
{
"_id" : 451,
"value" : 100
},
{
"_id" : 452,
"value" : 100
},
{
"_id" : 453,
"value" : 100
},
{
"_id" : 454,
"value" : 100
},
{
"_id" : 455,
"value" : 100
},
{
"_id" : 456,
"value" : 100
},
{
"_id" : 457,
"value" : 100
},
{
"_id" : 458,
"value" : 100
},
{
"_id" : 459,
"value" : 100
},
{
"_id" : 460,
"value" : 100
},
{
"_id" : 461,
"value" : 100
},
{
"_id" : 462,
"value" : 100
},
{
"_id" : 463,
"value" : 100
},
{
"_id" : 464,
"value" : 100
},
{
"_id" : 465,
"value" : 100
},
{
"_id" : 466,
"value" : 100
},
{
"_id" : 467,
"value" : 100
},
{
"_id" : 468,
"value" : 100
},
{
"_id" : 469,
"value" : 100
},
{
"_id" : 470,
"value" : 100
},
{
"_id" : 471,
"value" : 100
},
{
"_id" : 472,
"value" : 100
},
{
"_id" : 473,
"value" : 100
},
{
"_id" : 474,
"value" : 100
},
{
"_id" : 475,
"value" : 100
},
{
"_id" : 476,
"value" : 100
},
{
"_id" : 477,
"value" : 100
},
{
"_id" : 478,
"value" : 100
},
{
"_id" : 479,
"value" : 100
},
{
"_id" : 480,
"value" : 100
},
{
"_id" : 481,
"value" : 100
},
{
"_id" : 482,
"value" : 100
},
{
"_id" : 483,
"value" : 100
},
{
"_id" : 484,
"value" : 100
},
{
"_id" : 485,
"value" : 100
},
{
"_id" : 486,
"value" : 100
},
{
"_id" : 487,
"value" : 100
},
{
"_id" : 488,
"value" : 100
},
{
"_id" : 489,
"value" : 100
},
{
"_id" : 490,
"value" : 100
},
{
"_id" : 491,
"value" : 100
},
{
"_id" : 492,
"value" : 100
},
{
"_id" : 493,
"value" : 100
},
{
"_id" : 494,
"value" : 100
},
{
"_id" : 495,
"value" : 100
},
{
"_id" : 496,
"value" : 100
},
{
"_id" : 497,
"value" : 100
},
{
"_id" : 498,
"value" : 100
},
{
"_id" : 499,
"value" : 100
},
{
"_id" : 500,
"value" : 100
},
{
"_id" : 501,
"value" : 100
},
{
"_id" : 502,
"value" : 100
},
{
"_id" : 503,
"value" : 100
},
{
"_id" : 504,
"value" : 100
},
{
"_id" : 505,
"value" : 100
},
{
"_id" : 506,
"value" : 100
},
{
"_id" : 507,
"value" : 100
},
{
"_id" : 508,
"value" : 100
},
{
"_id" : 509,
"value" : 100
},
{
"_id" : 510,
"value" : 100
},
{
"_id" : 511,
"value" : 100
}
],
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1159,
"timing" : {
"shardProcessing" : 1117,
"postProcessing" : 42
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
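The 512-entry results array above is the shape an inline-output run returns: 51200 input documents over 512 distinct keys, each key reducing to 100. Note also that just before this block the mongos logs "MR with sharded output, NS=mrShard." with an empty collection name and shard0001 asserts "invalid ns: mrShard." (code 10356) on an index insert, yet the inline result still comes back with ok: 1. A hedged sketch of such a call, assuming inline output:

    var map = function () { emit(this.i, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    var res = db.getSiblingDB("mrShard").srcSharded.mapReduce(map, reduce, { out: { inline: 1 } });
    assert.eq(512, res.results.length);  // hypothetical check mirroring the counts above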
m30000| Thu Jun 14 01:41:28 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_12_inc
m30000| Thu Jun 14 01:41:28 [conn7] build index mrShard.tmp.mr.srcSharded_12_inc { 0: 1 }
m30000| Thu Jun 14 01:41:28 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:28 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_12
m30000| Thu Jun 14 01:41:28 [conn7] build index mrShard.tmp.mr.srcSharded_12 { _id: 1 }
m30000| Thu Jun 14 01:41:28 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShard.tmp.mrs.srcSharded_1339652488_16
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_12
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_12
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShard.tmp.mr.srcSharded_12_inc
m30000| Thu Jun 14 01:41:29 [conn7] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30000| emit(this.i, 1);
m30000| }, reduce: function reduce(key, values) {
m30000| return Array.sum(values);
m30000| }, out: "tmp.mrs.srcSharded_1339652488_16", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 641 locks(micros) W:24377 r:3482583 w:898789 reslen:172 377ms
m30001| Thu Jun 14 01:41:28 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_22_inc
m30001| Thu Jun 14 01:41:28 [conn3] build index mrShard.tmp.mr.srcSharded_22_inc { 0: 1 }
m30001| Thu Jun 14 01:41:28 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:28 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_22
m30001| Thu Jun 14 01:41:28 [conn3] build index mrShard.tmp.mr.srcSharded_22 { _id: 1 }
m30001| Thu Jun 14 01:41:28 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:29 [conn3] CMD: drop mrShard.tmp.mrs.srcSharded_1339652488_16
m30001| Thu Jun 14 01:41:29 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_22
m30001| Thu Jun 14 01:41:29 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_22
m30001| Thu Jun 14 01:41:29 [conn3] CMD: drop mrShard.tmp.mr.srcSharded_22_inc
m30001| Thu Jun 14 01:41:29 [conn3] command mrShard.$cmd command: { mapreduce: "srcSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcSharded_1339652488_16", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 896 locks(micros) W:47879 r:16824694 w:2770367 reslen:172 1043ms
m30999| Thu Jun 14 01:41:29 [conn] MR with sharded output, NS=mrShardOtherDB.mrReplaceInShardedOutSharded
m30999| Thu Jun 14 01:41:29 [conn] enable sharding on: mrShardOtherDB.mrReplaceInShardedOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:29 [conn] going to create 1 chunk(s) for: mrShardOtherDB.mrReplaceInShardedOutSharded using new epoch 4fd97989607081b222f4029d
m30000| Thu Jun 14 01:41:29 [conn10] build index mrShardOtherDB.mrReplaceInShardedOutSharded { _id: 1 }
m30000| Thu Jun 14 01:41:29 [conn10] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:29 [conn10] info: creating collection mrShardOtherDB.mrReplaceInShardedOutSharded on add index
m30999| Thu Jun 14 01:41:29 [conn] ChunkManager: time to load chunks for mrShardOtherDB.mrReplaceInShardedOutSharded: 0ms sequenceNumber: 12 version: 1|0||4fd97989607081b222f4029d based on: (empty)
m30999| Thu Jun 14 01:41:29 [conn] setShardVersion shard0000 localhost:30000 mrShardOtherDB.mrReplaceInShardedOutSharded { setShardVersion: "mrShardOtherDB.mrReplaceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97989607081b222f4029d'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:29 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShardOtherDB.mrReplaceInShardedOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShardOtherDB.mrReplaceInShardedOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:29 [conn] setShardVersion shard0000 localhost:30000 mrShardOtherDB.mrReplaceInShardedOutSharded { setShardVersion: "mrShardOtherDB.mrReplaceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97989607081b222f4029d'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30000| Thu Jun 14 01:41:29 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:29 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:29 [conn] resetting shard version of mrShardOtherDB.mrReplaceInShardedOutSharded on localhost:30001, version is zero
m30999| Thu Jun 14 01:41:29 [conn] setShardVersion shard0001 localhost:30001 mrShardOtherDB.mrReplaceInShardedOutSharded { setShardVersion: "mrShardOtherDB.mrReplaceInShardedOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:29 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:29 [conn] created new distributed lock for mrShardOtherDB.mrReplaceInShardedOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:29 [conn] inserting initial doc in config.locks for lock mrShardOtherDB.mrReplaceInShardedOutSharded
m30999| Thu Jun 14 01:41:29 [conn] about to acquire distributed lock 'mrShardOtherDB.mrReplaceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:29 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97989607081b222f4029e" } }
m30999| { "_id" : "mrShardOtherDB.mrReplaceInShardedOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:29 [conn] distributed lock 'mrShardOtherDB.mrReplaceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97989607081b222f4029e
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_13
m30000| Thu Jun 14 01:41:29 [conn7] build index mrShardOtherDB.tmp.mr.srcSharded_13 { _id: 1 }
m30000| Thu Jun 14 01:41:29 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShardOtherDB.mrReplaceInShardedOutSharded
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_13
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_13
m30000| Thu Jun 14 01:41:29 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcSharded_13
m30999| Thu Jun 14 01:41:29 [conn] distributed lock 'mrShardOtherDB.mrReplaceInShardedOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30000| Thu Jun 14 01:41:29 [conn6] CMD: drop mrShard.tmp.mrs.srcSharded_1339652488_16
m30001| Thu Jun 14 01:41:29 [conn2] CMD: drop mrShard.tmp.mrs.srcSharded_1339652488_16
{
"result" : {
"db" : "mrShardOtherDB",
"collection" : "mrReplaceInShardedOutSharded"
},
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(6144),
"output" : NumberLong(512)
},
"timeMillis" : 1079,
"timing" : {
"shardProcessing" : 1043,
"postProcessing" : 35
},
"shardCounts" : {
"localhost:30000" : {
"input" : 12896,
"emit" : 12896,
"reduce" : 1536,
"output" : 512
},
"localhost:30001" : {
"input" : 38304,
"emit" : 38304,
"reduce" : 4096,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30000" : {
"input" : NumberLong(1024),
"reduce" : NumberLong(512),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
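The result block above combines both variations seen earlier: sharded output replaced into a collection in a different database (mrShardOtherDB.mrReplaceInShardedOutSharded). A hedged sketch, assuming the db and sharded out options are used together:

    var map = function () { emit(this.i, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    db.getSiblingDB("mrShard").srcSharded.mapReduce(map, reduce, {
        out: { replace: "mrReplaceInShardedOutSharded", db: "mrShardOtherDB", sharded: true }
    });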
m30001| Thu Jun 14 01:41:29 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_23_inc
m30001| Thu Jun 14 01:41:29 [conn3] build index mrShard.tmp.mr.srcNonSharded_23_inc { 0: 1 }
m30001| Thu Jun 14 01:41:29 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:29 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_23
m30001| Thu Jun 14 01:41:29 [conn3] build index mrShard.tmp.mr.srcNonSharded_23 { _id: 1 }
m30001| Thu Jun 14 01:41:29 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:41:29 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:41:29 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:29 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97989607081b222f4029f" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd9797f607081b222f40294" } }
m30999| Thu Jun 14 01:41:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97989607081b222f4029f
m30999| Thu Jun 14 01:41:29 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.srcSharded-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: MinKey }, max: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796d56cc70fc67ed6799')", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796d56cc70fc67ed6799') }, max: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9796f56cc70fc67ed99f9')", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9796f56cc70fc67ed99f9') }, max: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.srcSharded-_id_ObjectId('4fd9797256cc70fc67ee03ae')", lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97967607081b222f40291'), ns: "mrShard.srcSharded", min: { _id: ObjectId('4fd9797256cc70fc67ee03ae') }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:29 [Balancer] ----
m30999| Thu Jun 14 01:41:29 [Balancer] collection : mrShard.srcSharded
m30999| Thu Jun 14 01:41:29 [Balancer] donor : 2 chunks on shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] receiver : 2 chunks on shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.mrReplaceInShardedOutSharded-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97985607081b222f40295'), ns: "mrShard.mrReplaceInShardedOutSharded", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:29 [Balancer] ----
m30999| Thu Jun 14 01:41:29 [Balancer] collection : mrShard.mrReplaceInShardedOutSharded
m30999| Thu Jun 14 01:41:29 [Balancer] donor : 1 chunks on shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] receiver : 0 chunks on shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.mrMergeInShardedOutSharded-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97986607081b222f40297'), ns: "mrShard.mrMergeInShardedOutSharded", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:29 [Balancer] ----
m30999| Thu Jun 14 01:41:29 [Balancer] collection : mrShard.mrMergeInShardedOutSharded
m30999| Thu Jun 14 01:41:29 [Balancer] donor : 1 chunks on shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] receiver : 0 chunks on shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.mrReduceInShardedOutSharded-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97987607081b222f40299'), ns: "mrShard.mrReduceInShardedOutSharded", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:29 [Balancer] ----
m30999| Thu Jun 14 01:41:29 [Balancer] collection : mrShard.mrReduceInShardedOutSharded
m30999| Thu Jun 14 01:41:29 [Balancer] donor : 1 chunks on shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] receiver : 0 chunks on shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShard.-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97988607081b222f4029b'), ns: "mrShard.", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:29 [Balancer] ----
m30999| Thu Jun 14 01:41:29 [Balancer] collection : mrShard.
m30999| Thu Jun 14 01:41:29 [Balancer] donor : 1 chunks on shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] receiver : 0 chunks on shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:29 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:29 [Balancer] shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] { _id: "mrShardOtherDB.mrReplaceInShardedOutSharded-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97989607081b222f4029d'), ns: "mrShardOtherDB.mrReplaceInShardedOutSharded", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:29 [Balancer] shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] ----
m30999| Thu Jun 14 01:41:29 [Balancer] collection : mrShardOtherDB.mrReplaceInShardedOutSharded
m30999| Thu Jun 14 01:41:29 [Balancer] donor : 1 chunks on shard0000
m30999| Thu Jun 14 01:41:29 [Balancer] receiver : 0 chunks on shard0001
m30999| Thu Jun 14 01:41:29 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:41:29 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:41:29 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
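The balancer round above compares chunk counts per shard for each sharded collection and concludes that no chunk needs to move. The same layout can be read back from the config database; the queries below are an illustrative sketch, not commands issued by the test:

var conf = db.getSiblingDB("config");
printjson(conf.locks.findOne({ _id: "balancer" }));          // balancer lock, state 0 when idle
conf.chunks.find({ ns: "mrShard.srcSharded" }).forEach(function (c) {
    // one line per chunk: owning shard plus its key range
    print(c.shard + "  " + tojson(c.min) + " --> " + tojson(c.max));
});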
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652489_17
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_23
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_23
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_23_inc
m30001| Thu Jun 14 01:41:31 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcNonSharded_1339652489_17", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:49886 r:18003737 w:2784865 reslen:175 1367ms
m30999| Thu Jun 14 01:41:31 [conn] MR with sharded output, NS=mrShard.mrReplaceOutSharded
m30999| Thu Jun 14 01:41:31 [conn] enable sharding on: mrShard.mrReplaceOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:31 [conn] going to create 1 chunk(s) for: mrShard.mrReplaceOutSharded using new epoch 4fd9798b607081b222f402a0
m30001| Thu Jun 14 01:41:31 [conn2] build index mrShard.mrReplaceOutSharded { _id: 1 }
m30001| Thu Jun 14 01:41:31 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:31 [conn2] info: creating collection mrShard.mrReplaceOutSharded on add index
m30999| Thu Jun 14 01:41:31 [conn] ChunkManager: time to load chunks for mrShard.mrReplaceOutSharded: 0ms sequenceNumber: 13 version: 1|0||4fd9798b607081b222f402a0 based on: (empty)
m30999| Thu Jun 14 01:41:31 [conn] resetting shard version of mrShard.mrReplaceOutSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:31 [conn] setShardVersion shard0000 localhost:30000 mrShard.mrReplaceOutSharded { setShardVersion: "mrShard.mrReplaceOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:31 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:31 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReplaceOutSharded { setShardVersion: "mrShard.mrReplaceOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9798b607081b222f402a0'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:31 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.mrReplaceOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.mrReplaceOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:31 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReplaceOutSharded { setShardVersion: "mrShard.mrReplaceOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9798b607081b222f402a0'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:31 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:31 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:31 [conn] created new distributed lock for mrShard.mrReplaceOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:31 [conn] inserting initial doc in config.locks for lock mrShard.mrReplaceOutSharded
m30999| Thu Jun 14 01:41:31 [conn] about to acquire distributed lock 'mrShard.mrReplaceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:31 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd9798b607081b222f402a1" } }
m30999| { "_id" : "mrShard.mrReplaceOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:31 [conn] distributed lock 'mrShard.mrReplaceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd9798b607081b222f402a1
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_24
m30001| Thu Jun 14 01:41:31 [conn3] build index mrShard.tmp.mr.srcNonSharded_24 { _id: 1 }
m30001| Thu Jun 14 01:41:31 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.mrReplaceOutSharded
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_24
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_24
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_24
m30999| Thu Jun 14 01:41:31 [conn] distributed lock 'mrShard.mrReplaceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30001| Thu Jun 14 01:41:31 [conn2] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652489_17
{
"result" : "mrReplaceOutSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(5120),
"output" : NumberLong(512)
},
"timeMillis" : 1420,
"timing" : {
"shardProcessing" : 1367,
"postProcessing" : 52
},
"shardCounts" : {
"localhost:30001" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(512),
"reduce" : NumberLong(0),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_25_inc
m30001| Thu Jun 14 01:41:31 [conn3] build index mrShard.tmp.mr.srcNonSharded_25_inc { 0: 1 }
m30001| Thu Jun 14 01:41:31 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:31 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_25
m30001| Thu Jun 14 01:41:31 [conn3] build index mrShard.tmp.mr.srcNonSharded_25 { _id: 1 }
m30001| Thu Jun 14 01:41:31 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652491_18
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_25
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_25
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_25_inc
m30001| Thu Jun 14 01:41:32 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcNonSharded_1339652491_18", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:54447 r:19064964 w:2808974 reslen:175 1248ms
m30999| Thu Jun 14 01:41:32 [conn] MR with sharded output, NS=mrShard.mrMergeOutSharded
m30999| Thu Jun 14 01:41:32 [conn] enable sharding on: mrShard.mrMergeOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:32 [conn] going to create 1 chunk(s) for: mrShard.mrMergeOutSharded using new epoch 4fd9798c607081b222f402a2
m30001| Thu Jun 14 01:41:32 [conn2] build index mrShard.mrMergeOutSharded { _id: 1 }
m30001| Thu Jun 14 01:41:32 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:32 [conn2] info: creating collection mrShard.mrMergeOutSharded on add index
m30999| Thu Jun 14 01:41:32 [conn] ChunkManager: time to load chunks for mrShard.mrMergeOutSharded: 0ms sequenceNumber: 14 version: 1|0||4fd9798c607081b222f402a2 based on: (empty)
m30999| Thu Jun 14 01:41:32 [conn] resetting shard version of mrShard.mrMergeOutSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:32 [conn] setShardVersion shard0000 localhost:30000 mrShard.mrMergeOutSharded { setShardVersion: "mrShard.mrMergeOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:32 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:32 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrMergeOutSharded { setShardVersion: "mrShard.mrMergeOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9798c607081b222f402a2'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:32 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.mrMergeOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.mrMergeOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:32 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrMergeOutSharded { setShardVersion: "mrShard.mrMergeOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9798c607081b222f402a2'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:32 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:32 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:32 [conn] created new distributed lock for mrShard.mrMergeOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:32 [conn] inserting initial doc in config.locks for lock mrShard.mrMergeOutSharded
m30999| Thu Jun 14 01:41:32 [conn] about to acquire distributed lock 'mrShard.mrMergeOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:32 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd9798c607081b222f402a3" } }
m30999| { "_id" : "mrShard.mrMergeOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:32 [conn] distributed lock 'mrShard.mrMergeOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd9798c607081b222f402a3
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_26
m30001| Thu Jun 14 01:41:32 [conn3] build index mrShard.tmp.mr.srcNonSharded_26 { _id: 1 }
m30001| Thu Jun 14 01:41:32 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.mrMergeOutSharded
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_26
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_26
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_26
m30999| Thu Jun 14 01:41:32 [conn] distributed lock 'mrShard.mrMergeOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30001| Thu Jun 14 01:41:32 [conn2] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652491_18
{
"result" : "mrMergeOutSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(5120),
"output" : NumberLong(512)
},
"timeMillis" : 1272,
"timing" : {
"shardProcessing" : 1249,
"postProcessing" : 23
},
"shardCounts" : {
"localhost:30001" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(512),
"reduce" : NumberLong(0),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_27_inc
m30001| Thu Jun 14 01:41:32 [conn3] build index mrShard.tmp.mr.srcNonSharded_27_inc { 0: 1 }
m30001| Thu Jun 14 01:41:32 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:32 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_27
m30001| Thu Jun 14 01:41:32 [conn3] build index mrShard.tmp.mr.srcNonSharded_27 { _id: 1 }
m30001| Thu Jun 14 01:41:32 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652492_19
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_27
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_27
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_27_inc
m30001| Thu Jun 14 01:41:33 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcNonSharded_1339652492_19", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:59268 r:20173829 w:2833418 reslen:175 1296ms
m30999| Thu Jun 14 01:41:33 [conn] MR with sharded output, NS=mrShard.mrReduceOutSharded
m30999| Thu Jun 14 01:41:33 [conn] enable sharding on: mrShard.mrReduceOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:33 [conn] going to create 1 chunk(s) for: mrShard.mrReduceOutSharded using new epoch 4fd9798d607081b222f402a4
m30001| Thu Jun 14 01:41:33 [conn2] build index mrShard.mrReduceOutSharded { _id: 1 }
m30999| Thu Jun 14 01:41:33 [conn] ChunkManager: time to load chunks for mrShard.mrReduceOutSharded: 0ms sequenceNumber: 15 version: 1|0||4fd9798d607081b222f402a4 based on: (empty)
m30999| Thu Jun 14 01:41:33 [conn] resetting shard version of mrShard.mrReduceOutSharded on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:33 [conn] setShardVersion shard0000 localhost:30000 mrShard.mrReduceOutSharded { setShardVersion: "mrShard.mrReduceOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:33 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:33 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReduceOutSharded { setShardVersion: "mrShard.mrReduceOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9798d607081b222f402a4'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:33 [conn2] build index done. scanned 0 total records. 0.012 secs
m30001| Thu Jun 14 01:41:33 [conn2] info: creating collection mrShard.mrReduceOutSharded on add index
m30999| Thu Jun 14 01:41:33 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShard.mrReduceOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShard.mrReduceOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:33 [conn] setShardVersion shard0001 localhost:30001 mrShard.mrReduceOutSharded { setShardVersion: "mrShard.mrReduceOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd9798d607081b222f402a4'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30001| Thu Jun 14 01:41:33 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:33 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:33 [conn] created new distributed lock for mrShard.mrReduceOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:33 [conn] inserting initial doc in config.locks for lock mrShard.mrReduceOutSharded
m30999| Thu Jun 14 01:41:33 [conn] about to acquire distributed lock 'mrShard.mrReduceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:33 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd9798d607081b222f402a5" } }
m30999| { "_id" : "mrShard.mrReduceOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:33 [conn] distributed lock 'mrShard.mrReduceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd9798d607081b222f402a5
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_28
m30001| Thu Jun 14 01:41:33 [conn3] build index mrShard.tmp.mr.srcNonSharded_28 { _id: 1 }
m30001| Thu Jun 14 01:41:33 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.mrReduceOutSharded
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_28
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_28
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_28
m30999| Thu Jun 14 01:41:33 [conn] distributed lock 'mrShard.mrReduceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30001| Thu Jun 14 01:41:33 [conn2] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652492_19
{
"result" : "mrReduceOutSharded",
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(5120),
"output" : NumberLong(512)
},
"timeMillis" : 1330,
"timing" : {
"shardProcessing" : 1297,
"postProcessing" : 33
},
"shardCounts" : {
"localhost:30001" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(512),
"reduce" : NumberLong(0),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
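The three runs above push the non-sharded source collection through the replace, merge and reduce output actions, each into a sharded target. A condensed sketch of those calls, assuming the shell mapReduce helper (collection names are the ones in the log):

var map    = function () { emit(this.i, 1); };
var reduce = function (key, values) { return Array.sum(values); };
db.srcNonSharded.mapReduce(map, reduce, { out: { replace: "mrReplaceOutSharded", sharded: true } });
db.srcNonSharded.mapReduce(map, reduce, { out: { merge:   "mrMergeOutSharded",   sharded: true } });
db.srcNonSharded.mapReduce(map, reduce, { out: { reduce:  "mrReduceOutSharded",  sharded: true } });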
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_29_inc
m30001| Thu Jun 14 01:41:33 [conn3] build index mrShard.tmp.mr.srcNonSharded_29_inc { 0: 1 }
m30001| Thu Jun 14 01:41:33 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:33 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_29
m30001| Thu Jun 14 01:41:33 [conn3] build index mrShard.tmp.mr.srcNonSharded_29 { _id: 1 }
m30001| Thu Jun 14 01:41:33 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:35 [conn3] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652493_20
m30001| Thu Jun 14 01:41:35 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_29
m30001| Thu Jun 14 01:41:35 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_29
m30001| Thu Jun 14 01:41:35 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_29_inc
m30001| Thu Jun 14 01:41:35 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcNonSharded_1339652493_20", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:63839 r:21287661 w:2857531 reslen:175 1301ms
m30999| Thu Jun 14 01:41:35 [conn] MR with sharded output, NS=mrShard.
m30999| Thu Jun 14 01:41:35 [conn] created new distributed lock for mrShard. on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:35 [conn] about to acquire distributed lock 'mrShard./domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:35 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd9798f607081b222f402a6" } }
m30999| { "_id" : "mrShard.",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97988607081b222f4029c" } }
m30999| Thu Jun 14 01:41:35 [conn] distributed lock 'mrShard./domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd9798f607081b222f402a6
m30999| Thu Jun 14 01:41:35 [conn] distributed lock 'mrShard./domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30001| Thu Jun 14 01:41:35 [conn2] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652493_20
{
"results" : [
{
"_id" : 0,
"value" : 100
},
{
"_id" : 1,
"value" : 100
},
{
"_id" : 2,
"value" : 100
},
{
"_id" : 3,
"value" : 100
},
{
"_id" : 4,
"value" : 100
},
{
"_id" : 5,
"value" : 100
},
{
"_id" : 6,
"value" : 100
},
{
"_id" : 7,
"value" : 100
},
{
"_id" : 8,
"value" : 100
},
{
"_id" : 9,
"value" : 100
},
{
"_id" : 10,
"value" : 100
},
{
"_id" : 11,
"value" : 100
},
{
"_id" : 12,
"value" : 100
},
{
"_id" : 13,
"value" : 100
},
{
"_id" : 14,
"value" : 100
},
{
"_id" : 15,
"value" : 100
},
{
"_id" : 16,
"value" : 100
},
{
"_id" : 17,
"value" : 100
},
{
"_id" : 18,
"value" : 100
},
{
"_id" : 19,
"value" : 100
},
{
"_id" : 20,
"value" : 100
},
{
"_id" : 21,
"value" : 100
},
{
"_id" : 22,
"value" : 100
},
{
"_id" : 23,
"value" : 100
},
{
"_id" : 24,
"value" : 100
},
{
"_id" : 25,
"value" : 100
},
{
"_id" : 26,
"value" : 100
},
{
"_id" : 27,
"value" : 100
},
{
"_id" : 28,
"value" : 100
},
{
"_id" : 29,
"value" : 100
},
{
"_id" : 30,
"value" : 100
},
{
"_id" : 31,
"value" : 100
},
{
"_id" : 32,
"value" : 100
},
{
"_id" : 33,
"value" : 100
},
{
"_id" : 34,
"value" : 100
},
{
"_id" : 35,
"value" : 100
},
{
"_id" : 36,
"value" : 100
},
{
"_id" : 37,
"value" : 100
},
{
"_id" : 38,
"value" : 100
},
{
"_id" : 39,
"value" : 100
},
{
"_id" : 40,
"value" : 100
},
{
"_id" : 41,
"value" : 100
},
{
"_id" : 42,
"value" : 100
},
{
"_id" : 43,
"value" : 100
},
{
"_id" : 44,
"value" : 100
},
{
"_id" : 45,
"value" : 100
},
{
"_id" : 46,
"value" : 100
},
{
"_id" : 47,
"value" : 100
},
{
"_id" : 48,
"value" : 100
},
{
"_id" : 49,
"value" : 100
},
{
"_id" : 50,
"value" : 100
},
{
"_id" : 51,
"value" : 100
},
{
"_id" : 52,
"value" : 100
},
{
"_id" : 53,
"value" : 100
},
{
"_id" : 54,
"value" : 100
},
{
"_id" : 55,
"value" : 100
},
{
"_id" : 56,
"value" : 100
},
{
"_id" : 57,
"value" : 100
},
{
"_id" : 58,
"value" : 100
},
{
"_id" : 59,
"value" : 100
},
{
"_id" : 60,
"value" : 100
},
{
"_id" : 61,
"value" : 100
},
{
"_id" : 62,
"value" : 100
},
{
"_id" : 63,
"value" : 100
},
{
"_id" : 64,
"value" : 100
},
{
"_id" : 65,
"value" : 100
},
{
"_id" : 66,
"value" : 100
},
{
"_id" : 67,
"value" : 100
},
{
"_id" : 68,
"value" : 100
},
{
"_id" : 69,
"value" : 100
},
{
"_id" : 70,
"value" : 100
},
{
"_id" : 71,
"value" : 100
},
{
"_id" : 72,
"value" : 100
},
{
"_id" : 73,
"value" : 100
},
{
"_id" : 74,
"value" : 100
},
{
"_id" : 75,
"value" : 100
},
{
"_id" : 76,
"value" : 100
},
{
"_id" : 77,
"value" : 100
},
{
"_id" : 78,
"value" : 100
},
{
"_id" : 79,
"value" : 100
},
{
"_id" : 80,
"value" : 100
},
{
"_id" : 81,
"value" : 100
},
{
"_id" : 82,
"value" : 100
},
{
"_id" : 83,
"value" : 100
},
{
"_id" : 84,
"value" : 100
},
{
"_id" : 85,
"value" : 100
},
{
"_id" : 86,
"value" : 100
},
{
"_id" : 87,
"value" : 100
},
{
"_id" : 88,
"value" : 100
},
{
"_id" : 89,
"value" : 100
},
{
"_id" : 90,
"value" : 100
},
{
"_id" : 91,
"value" : 100
},
{
"_id" : 92,
"value" : 100
},
{
"_id" : 93,
"value" : 100
},
{
"_id" : 94,
"value" : 100
},
{
"_id" : 95,
"value" : 100
},
{
"_id" : 96,
"value" : 100
},
{
"_id" : 97,
"value" : 100
},
{
"_id" : 98,
"value" : 100
},
{
"_id" : 99,
"value" : 100
},
{
"_id" : 100,
"value" : 100
},
{
"_id" : 101,
"value" : 100
},
{
"_id" : 102,
"value" : 100
},
{
"_id" : 103,
"value" : 100
},
{
"_id" : 104,
"value" : 100
},
{
"_id" : 105,
"value" : 100
},
{
"_id" : 106,
"value" : 100
},
{
"_id" : 107,
"value" : 100
},
{
"_id" : 108,
"value" : 100
},
{
"_id" : 109,
"value" : 100
},
{
"_id" : 110,
"value" : 100
},
{
"_id" : 111,
"value" : 100
},
{
"_id" : 112,
"value" : 100
},
{
"_id" : 113,
"value" : 100
},
{
"_id" : 114,
"value" : 100
},
{
"_id" : 115,
"value" : 100
},
{
"_id" : 116,
"value" : 100
},
{
"_id" : 117,
"value" : 100
},
{
"_id" : 118,
"value" : 100
},
{
"_id" : 119,
"value" : 100
},
{
"_id" : 120,
"value" : 100
},
{
"_id" : 121,
"value" : 100
},
{
"_id" : 122,
"value" : 100
},
{
"_id" : 123,
"value" : 100
},
{
"_id" : 124,
"value" : 100
},
{
"_id" : 125,
"value" : 100
},
{
"_id" : 126,
"value" : 100
},
{
"_id" : 127,
"value" : 100
},
{
"_id" : 128,
"value" : 100
},
{
"_id" : 129,
"value" : 100
},
{
"_id" : 130,
"value" : 100
},
{
"_id" : 131,
"value" : 100
},
{
"_id" : 132,
"value" : 100
},
{
"_id" : 133,
"value" : 100
},
{
"_id" : 134,
"value" : 100
},
{
"_id" : 135,
"value" : 100
},
{
"_id" : 136,
"value" : 100
},
{
"_id" : 137,
"value" : 100
},
{
"_id" : 138,
"value" : 100
},
{
"_id" : 139,
"value" : 100
},
{
"_id" : 140,
"value" : 100
},
{
"_id" : 141,
"value" : 100
},
{
"_id" : 142,
"value" : 100
},
{
"_id" : 143,
"value" : 100
},
{
"_id" : 144,
"value" : 100
},
{
"_id" : 145,
"value" : 100
},
{
"_id" : 146,
"value" : 100
},
{
"_id" : 147,
"value" : 100
},
{
"_id" : 148,
"value" : 100
},
{
"_id" : 149,
"value" : 100
},
{
"_id" : 150,
"value" : 100
},
{
"_id" : 151,
"value" : 100
},
{
"_id" : 152,
"value" : 100
},
{
"_id" : 153,
"value" : 100
},
{
"_id" : 154,
"value" : 100
},
{
"_id" : 155,
"value" : 100
},
{
"_id" : 156,
"value" : 100
},
{
"_id" : 157,
"value" : 100
},
{
"_id" : 158,
"value" : 100
},
{
"_id" : 159,
"value" : 100
},
{
"_id" : 160,
"value" : 100
},
{
"_id" : 161,
"value" : 100
},
{
"_id" : 162,
"value" : 100
},
{
"_id" : 163,
"value" : 100
},
{
"_id" : 164,
"value" : 100
},
{
"_id" : 165,
"value" : 100
},
{
"_id" : 166,
"value" : 100
},
{
"_id" : 167,
"value" : 100
},
{
"_id" : 168,
"value" : 100
},
{
"_id" : 169,
"value" : 100
},
{
"_id" : 170,
"value" : 100
},
{
"_id" : 171,
"value" : 100
},
{
"_id" : 172,
"value" : 100
},
{
"_id" : 173,
"value" : 100
},
{
"_id" : 174,
"value" : 100
},
{
"_id" : 175,
"value" : 100
},
{
"_id" : 176,
"value" : 100
},
{
"_id" : 177,
"value" : 100
},
{
"_id" : 178,
"value" : 100
},
{
"_id" : 179,
"value" : 100
},
{
"_id" : 180,
"value" : 100
},
{
"_id" : 181,
"value" : 100
},
{
"_id" : 182,
"value" : 100
},
{
"_id" : 183,
"value" : 100
},
{
"_id" : 184,
"value" : 100
},
{
"_id" : 185,
"value" : 100
},
{
"_id" : 186,
"value" : 100
},
{
"_id" : 187,
"value" : 100
},
{
"_id" : 188,
"value" : 100
},
{
"_id" : 189,
"value" : 100
},
{
"_id" : 190,
"value" : 100
},
{
"_id" : 191,
"value" : 100
},
{
"_id" : 192,
"value" : 100
},
{
"_id" : 193,
"value" : 100
},
{
"_id" : 194,
"value" : 100
},
{
"_id" : 195,
"value" : 100
},
{
"_id" : 196,
"value" : 100
},
{
"_id" : 197,
"value" : 100
},
{
"_id" : 198,
"value" : 100
},
{
"_id" : 199,
"value" : 100
},
{
"_id" : 200,
"value" : 100
},
{
"_id" : 201,
"value" : 100
},
{
"_id" : 202,
"value" : 100
},
{
"_id" : 203,
"value" : 100
},
{
"_id" : 204,
"value" : 100
},
{
"_id" : 205,
"value" : 100
},
{
"_id" : 206,
"value" : 100
},
{
"_id" : 207,
"value" : 100
},
{
"_id" : 208,
"value" : 100
},
{
"_id" : 209,
"value" : 100
},
{
"_id" : 210,
"value" : 100
},
{
"_id" : 211,
"value" : 100
},
{
"_id" : 212,
"value" : 100
},
{
"_id" : 213,
"value" : 100
},
{
"_id" : 214,
"value" : 100
},
{
"_id" : 215,
"value" : 100
},
{
"_id" : 216,
"value" : 100
},
{
"_id" : 217,
"value" : 100
},
{
"_id" : 218,
"value" : 100
},
{
"_id" : 219,
"value" : 100
},
{
"_id" : 220,
"value" : 100
},
{
"_id" : 221,
"value" : 100
},
{
"_id" : 222,
"value" : 100
},
{
"_id" : 223,
"value" : 100
},
{
"_id" : 224,
"value" : 100
},
{
"_id" : 225,
"value" : 100
},
{
"_id" : 226,
"value" : 100
},
{
"_id" : 227,
"value" : 100
},
{
"_id" : 228,
"value" : 100
},
{
"_id" : 229,
"value" : 100
},
{
"_id" : 230,
"value" : 100
},
{
"_id" : 231,
"value" : 100
},
{
"_id" : 232,
"value" : 100
},
{
"_id" : 233,
"value" : 100
},
{
"_id" : 234,
"value" : 100
},
{
"_id" : 235,
"value" : 100
},
{
"_id" : 236,
"value" : 100
},
{
"_id" : 237,
"value" : 100
},
{
"_id" : 238,
"value" : 100
},
{
"_id" : 239,
"value" : 100
},
{
"_id" : 240,
"value" : 100
},
{
"_id" : 241,
"value" : 100
},
{
"_id" : 242,
"value" : 100
},
{
"_id" : 243,
"value" : 100
},
{
"_id" : 244,
"value" : 100
},
{
"_id" : 245,
"value" : 100
},
{
"_id" : 246,
"value" : 100
},
{
"_id" : 247,
"value" : 100
},
{
"_id" : 248,
"value" : 100
},
{
"_id" : 249,
"value" : 100
},
{
"_id" : 250,
"value" : 100
},
{
"_id" : 251,
"value" : 100
},
{
"_id" : 252,
"value" : 100
},
{
"_id" : 253,
"value" : 100
},
{
"_id" : 254,
"value" : 100
},
{
"_id" : 255,
"value" : 100
},
{
"_id" : 256,
"value" : 100
},
{
"_id" : 257,
"value" : 100
},
{
"_id" : 258,
"value" : 100
},
{
"_id" : 259,
"value" : 100
},
{
"_id" : 260,
"value" : 100
},
{
"_id" : 261,
"value" : 100
},
{
"_id" : 262,
"value" : 100
},
{
"_id" : 263,
"value" : 100
},
{
"_id" : 264,
"value" : 100
},
{
"_id" : 265,
"value" : 100
},
{
"_id" : 266,
"value" : 100
},
{
"_id" : 267,
"value" : 100
},
{
"_id" : 268,
"value" : 100
},
{
"_id" : 269,
"value" : 100
},
{
"_id" : 270,
"value" : 100
},
{
"_id" : 271,
"value" : 100
},
{
"_id" : 272,
"value" : 100
},
{
"_id" : 273,
"value" : 100
},
{
"_id" : 274,
"value" : 100
},
{
"_id" : 275,
"value" : 100
},
{
"_id" : 276,
"value" : 100
},
{
"_id" : 277,
"value" : 100
},
{
"_id" : 278,
"value" : 100
},
{
"_id" : 279,
"value" : 100
},
{
"_id" : 280,
"value" : 100
},
{
"_id" : 281,
"value" : 100
},
{
"_id" : 282,
"value" : 100
},
{
"_id" : 283,
"value" : 100
},
{
"_id" : 284,
"value" : 100
},
{
"_id" : 285,
"value" : 100
},
{
"_id" : 286,
"value" : 100
},
{
"_id" : 287,
"value" : 100
},
{
"_id" : 288,
"value" : 100
},
{
"_id" : 289,
"value" : 100
},
{
"_id" : 290,
"value" : 100
},
{
"_id" : 291,
"value" : 100
},
{
"_id" : 292,
"value" : 100
},
{
"_id" : 293,
"value" : 100
},
{
"_id" : 294,
"value" : 100
},
{
"_id" : 295,
"value" : 100
},
{
"_id" : 296,
"value" : 100
},
{
"_id" : 297,
"value" : 100
},
{
"_id" : 298,
"value" : 100
},
{
"_id" : 299,
"value" : 100
},
{
"_id" : 300,
"value" : 100
},
{
"_id" : 301,
"value" : 100
},
{
"_id" : 302,
"value" : 100
},
{
"_id" : 303,
"value" : 100
},
{
"_id" : 304,
"value" : 100
},
{
"_id" : 305,
"value" : 100
},
{
"_id" : 306,
"value" : 100
},
{
"_id" : 307,
"value" : 100
},
{
"_id" : 308,
"value" : 100
},
{
"_id" : 309,
"value" : 100
},
{
"_id" : 310,
"value" : 100
},
{
"_id" : 311,
"value" : 100
},
{
"_id" : 312,
"value" : 100
},
{
"_id" : 313,
"value" : 100
},
{
"_id" : 314,
"value" : 100
},
{
"_id" : 315,
"value" : 100
},
{
"_id" : 316,
"value" : 100
},
{
"_id" : 317,
"value" : 100
},
{
"_id" : 318,
"value" : 100
},
{
"_id" : 319,
"value" : 100
},
{
"_id" : 320,
"value" : 100
},
{
"_id" : 321,
"value" : 100
},
{
"_id" : 322,
"value" : 100
},
{
"_id" : 323,
"value" : 100
},
{
"_id" : 324,
"value" : 100
},
{
"_id" : 325,
"value" : 100
},
{
"_id" : 326,
"value" : 100
},
{
"_id" : 327,
"value" : 100
},
{
"_id" : 328,
"value" : 100
},
{
"_id" : 329,
"value" : 100
},
{
"_id" : 330,
"value" : 100
},
{
"_id" : 331,
"value" : 100
},
{
"_id" : 332,
"value" : 100
},
{
"_id" : 333,
"value" : 100
},
{
"_id" : 334,
"value" : 100
},
{
"_id" : 335,
"value" : 100
},
{
"_id" : 336,
"value" : 100
},
{
"_id" : 337,
"value" : 100
},
{
"_id" : 338,
"value" : 100
},
{
"_id" : 339,
"value" : 100
},
{
"_id" : 340,
"value" : 100
},
{
"_id" : 341,
"value" : 100
},
{
"_id" : 342,
"value" : 100
},
{
"_id" : 343,
"value" : 100
},
{
"_id" : 344,
"value" : 100
},
{
"_id" : 345,
"value" : 100
},
{
"_id" : 346,
"value" : 100
},
{
"_id" : 347,
"value" : 100
},
{
"_id" : 348,
"value" : 100
},
{
"_id" : 349,
"value" : 100
},
{
"_id" : 350,
"value" : 100
},
{
"_id" : 351,
"value" : 100
},
{
"_id" : 352,
"value" : 100
},
{
"_id" : 353,
"value" : 100
},
{
"_id" : 354,
"value" : 100
},
{
"_id" : 355,
"value" : 100
},
{
"_id" : 356,
"value" : 100
},
{
"_id" : 357,
"value" : 100
},
{
"_id" : 358,
"value" : 100
},
{
"_id" : 359,
"value" : 100
},
{
"_id" : 360,
"value" : 100
},
{
"_id" : 361,
"value" : 100
},
{
"_id" : 362,
"value" : 100
},
{
"_id" : 363,
"value" : 100
},
{
"_id" : 364,
"value" : 100
},
{
"_id" : 365,
"value" : 100
},
{
"_id" : 366,
"value" : 100
},
{
"_id" : 367,
"value" : 100
},
{
"_id" : 368,
"value" : 100
},
{
"_id" : 369,
"value" : 100
},
{
"_id" : 370,
"value" : 100
},
{
"_id" : 371,
"value" : 100
},
{
"_id" : 372,
"value" : 100
},
{
"_id" : 373,
"value" : 100
},
{
"_id" : 374,
"value" : 100
},
{
"_id" : 375,
"value" : 100
},
{
"_id" : 376,
"value" : 100
},
{
"_id" : 377,
"value" : 100
},
{
"_id" : 378,
"value" : 100
},
{
"_id" : 379,
"value" : 100
},
{
"_id" : 380,
"value" : 100
},
{
"_id" : 381,
"value" : 100
},
{
"_id" : 382,
"value" : 100
},
{
"_id" : 383,
"value" : 100
},
{
"_id" : 384,
"value" : 100
},
{
"_id" : 385,
"value" : 100
},
{
"_id" : 386,
"value" : 100
},
{
"_id" : 387,
"value" : 100
},
{
"_id" : 388,
"value" : 100
},
{
"_id" : 389,
"value" : 100
},
{
"_id" : 390,
"value" : 100
},
{
"_id" : 391,
"value" : 100
},
{
"_id" : 392,
"value" : 100
},
{
"_id" : 393,
"value" : 100
},
{
"_id" : 394,
"value" : 100
},
{
"_id" : 395,
"value" : 100
},
{
"_id" : 396,
"value" : 100
},
{
"_id" : 397,
"value" : 100
},
{
"_id" : 398,
"value" : 100
},
{
"_id" : 399,
"value" : 100
},
{
"_id" : 400,
"value" : 100
},
{
"_id" : 401,
"value" : 100
},
{
"_id" : 402,
"value" : 100
},
{
"_id" : 403,
"value" : 100
},
{
"_id" : 404,
"value" : 100
},
{
"_id" : 405,
"value" : 100
},
{
"_id" : 406,
"value" : 100
},
{
"_id" : 407,
"value" : 100
},
{
"_id" : 408,
"value" : 100
},
{
"_id" : 409,
"value" : 100
},
{
"_id" : 410,
"value" : 100
},
{
"_id" : 411,
"value" : 100
},
{
"_id" : 412,
"value" : 100
},
{
"_id" : 413,
"value" : 100
},
{
"_id" : 414,
"value" : 100
},
{
"_id" : 415,
"value" : 100
},
{
"_id" : 416,
"value" : 100
},
{
"_id" : 417,
"value" : 100
},
{
"_id" : 418,
"value" : 100
},
{
"_id" : 419,
"value" : 100
},
{
"_id" : 420,
"value" : 100
},
{
"_id" : 421,
"value" : 100
},
{
"_id" : 422,
"value" : 100
},
{
"_id" : 423,
"value" : 100
},
{
"_id" : 424,
"value" : 100
},
{
"_id" : 425,
"value" : 100
},
{
"_id" : 426,
"value" : 100
},
{
"_id" : 427,
"value" : 100
},
{
"_id" : 428,
"value" : 100
},
{
"_id" : 429,
"value" : 100
},
{
"_id" : 430,
"value" : 100
},
{
"_id" : 431,
"value" : 100
},
{
"_id" : 432,
"value" : 100
},
{
"_id" : 433,
"value" : 100
},
{
"_id" : 434,
"value" : 100
},
{
"_id" : 435,
"value" : 100
},
{
"_id" : 436,
"value" : 100
},
{
"_id" : 437,
"value" : 100
},
{
"_id" : 438,
"value" : 100
},
{
"_id" : 439,
"value" : 100
},
{
"_id" : 440,
"value" : 100
},
{
"_id" : 441,
"value" : 100
},
{
"_id" : 442,
"value" : 100
},
{
"_id" : 443,
"value" : 100
},
{
"_id" : 444,
"value" : 100
},
{
"_id" : 445,
"value" : 100
},
{
"_id" : 446,
"value" : 100
},
{
"_id" : 447,
"value" : 100
},
{
"_id" : 448,
"value" : 100
},
{
"_id" : 449,
"value" : 100
},
{
"_id" : 450,
"value" : 100
},
{
"_id" : 451,
"value" : 100
},
{
"_id" : 452,
"value" : 100
},
{
"_id" : 453,
"value" : 100
},
{
"_id" : 454,
"value" : 100
},
{
"_id" : 455,
"value" : 100
},
{
"_id" : 456,
"value" : 100
},
{
"_id" : 457,
"value" : 100
},
{
"_id" : 458,
"value" : 100
},
{
"_id" : 459,
"value" : 100
},
{
"_id" : 460,
"value" : 100
},
{
"_id" : 461,
"value" : 100
},
{
"_id" : 462,
"value" : 100
},
{
"_id" : 463,
"value" : 100
},
{
"_id" : 464,
"value" : 100
},
{
"_id" : 465,
"value" : 100
},
{
"_id" : 466,
"value" : 100
},
{
"_id" : 467,
"value" : 100
},
{
"_id" : 468,
"value" : 100
},
{
"_id" : 469,
"value" : 100
},
{
"_id" : 470,
"value" : 100
},
{
"_id" : 471,
"value" : 100
},
{
"_id" : 472,
"value" : 100
},
{
"_id" : 473,
"value" : 100
},
{
"_id" : 474,
"value" : 100
},
{
"_id" : 475,
"value" : 100
},
{
"_id" : 476,
"value" : 100
},
{
"_id" : 477,
"value" : 100
},
{
"_id" : 478,
"value" : 100
},
{
"_id" : 479,
"value" : 100
},
{
"_id" : 480,
"value" : 100
},
{
"_id" : 481,
"value" : 100
},
{
"_id" : 482,
"value" : 100
},
{
"_id" : 483,
"value" : 100
},
{
"_id" : 484,
"value" : 100
},
{
"_id" : 485,
"value" : 100
},
{
"_id" : 486,
"value" : 100
},
{
"_id" : 487,
"value" : 100
},
{
"_id" : 488,
"value" : 100
},
{
"_id" : 489,
"value" : 100
},
{
"_id" : 490,
"value" : 100
},
{
"_id" : 491,
"value" : 100
},
{
"_id" : 492,
"value" : 100
},
{
"_id" : 493,
"value" : 100
},
{
"_id" : 494,
"value" : 100
},
{
"_id" : 495,
"value" : 100
},
{
"_id" : 496,
"value" : 100
},
{
"_id" : 497,
"value" : 100
},
{
"_id" : 498,
"value" : 100
},
{
"_id" : 499,
"value" : 100
},
{
"_id" : 500,
"value" : 100
},
{
"_id" : 501,
"value" : 100
},
{
"_id" : 502,
"value" : 100
},
{
"_id" : 503,
"value" : 100
},
{
"_id" : 504,
"value" : 100
},
{
"_id" : 505,
"value" : 100
},
{
"_id" : 506,
"value" : 100
},
{
"_id" : 507,
"value" : 100
},
{
"_id" : 508,
"value" : 100
},
{
"_id" : 509,
"value" : 100
},
{
"_id" : 510,
"value" : 100
},
{
"_id" : 511,
"value" : 100
}
],
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(5120),
"output" : NumberLong(512)
},
"timeMillis" : 1308,
"timing" : {
"shardProcessing" : 1302,
"postProcessing" : 6
},
"shardCounts" : {
"localhost:30001" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30001" : {
"input" : NumberLong(512),
"reduce" : NumberLong(0),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
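The 512-element results array above is what inline output looks like: the reduced documents are returned in the command reply instead of being written to a collection. A sketch of requesting inline output (the call itself is assumed; the functions are the ones in the log):

var res = db.srcNonSharded.mapReduce(
    function () { emit(this.i, 1); },
    function (key, values) { return Array.sum(values); },
    { out: { inline: 1 } }
);
print(res.results.length);   // 512 distinct keys, matching counts.output above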
m30001| Thu Jun 14 01:41:35 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_30_inc
m30001| Thu Jun 14 01:41:35 [conn3] build index mrShard.tmp.mr.srcNonSharded_30_inc { 0: 1 }
m30001| Thu Jun 14 01:41:35 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:35 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_30
m30001| Thu Jun 14 01:41:35 [conn3] build index mrShard.tmp.mr.srcNonSharded_30 { _id: 1 }
m30001| Thu Jun 14 01:41:35 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:36 [conn3] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652495_21
m30001| Thu Jun 14 01:41:36 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_30
m30001| Thu Jun 14 01:41:36 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_30
m30001| Thu Jun 14 01:41:36 [conn3] CMD: drop mrShard.tmp.mr.srcNonSharded_30_inc
m30001| Thu Jun 14 01:41:36 [conn3] command mrShard.$cmd command: { mapreduce: "srcNonSharded", map: function map() {
m30001| emit(this.i, 1);
m30001| }, reduce: function reduce(key, values) {
m30001| return Array.sum(values);
m30001| }, out: "tmp.mrs.srcNonSharded_1339652495_21", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 1025 locks(micros) W:66050 r:22343012 w:2872004 reslen:175 1325ms
m30999| Thu Jun 14 01:41:36 [conn] MR with sharded output, NS=mrShardOtherDB.mrReplaceOutSharded
m30999| Thu Jun 14 01:41:36 [conn] enable sharding on: mrShardOtherDB.mrReplaceOutSharded with shard key: { _id: 1 }
m30999| Thu Jun 14 01:41:36 [conn] going to create 1 chunk(s) for: mrShardOtherDB.mrReplaceOutSharded using new epoch 4fd97990607081b222f402a7
m30000| Thu Jun 14 01:41:36 [conn10] build index mrShardOtherDB.mrReplaceOutSharded { _id: 1 }
m30000| Thu Jun 14 01:41:36 [conn10] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:36 [conn10] info: creating collection mrShardOtherDB.mrReplaceOutSharded on add index
m30999| Thu Jun 14 01:41:36 [conn] ChunkManager: time to load chunks for mrShardOtherDB.mrReplaceOutSharded: 0ms sequenceNumber: 16 version: 1|0||4fd97990607081b222f402a7 based on: (empty)
m30999| Thu Jun 14 01:41:36 [conn] setShardVersion shard0000 localhost:30000 mrShardOtherDB.mrReplaceOutSharded { setShardVersion: "mrShardOtherDB.mrReplaceOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97990607081b222f402a7'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30999| Thu Jun 14 01:41:36 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mrShardOtherDB.mrReplaceOutSharded", need_authoritative: true, errmsg: "first time for collection 'mrShardOtherDB.mrReplaceOutSharded'", ok: 0.0 }
m30999| Thu Jun 14 01:41:36 [conn] setShardVersion shard0000 localhost:30000 mrShardOtherDB.mrReplaceOutSharded { setShardVersion: "mrShardOtherDB.mrReplaceOutSharded", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97990607081b222f402a7'), serverID: ObjectId('4fd97967607081b222f4028f'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9971470
m30000| Thu Jun 14 01:41:36 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:41:36 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:36 [conn] resetting shard version of mrShardOtherDB.mrReplaceOutSharded on localhost:30001, version is zero
m30999| Thu Jun 14 01:41:36 [conn] setShardVersion shard0001 localhost:30001 mrShardOtherDB.mrReplaceOutSharded { setShardVersion: "mrShardOtherDB.mrReplaceOutSharded", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97967607081b222f4028f'), shard: "shard0001", shardHost: "localhost:30001" } 0x9972350
m30999| Thu Jun 14 01:41:36 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:41:36 [conn] created new distributed lock for mrShardOtherDB.mrReplaceOutSharded on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:41:36 [conn] inserting initial doc in config.locks for lock mrShardOtherDB.mrReplaceOutSharded
m30999| Thu Jun 14 01:41:36 [conn] about to acquire distributed lock 'mrShardOtherDB.mrReplaceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383:conn:1369133069",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652455:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:41:36 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97990607081b222f402a8" } }
m30999| { "_id" : "mrShardOtherDB.mrReplaceOutSharded",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:41:36 [conn] distributed lock 'mrShardOtherDB.mrReplaceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' acquired, ts : 4fd97990607081b222f402a8
m30000| Thu Jun 14 01:41:36 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_14
m30000| Thu Jun 14 01:41:36 [conn7] build index mrShardOtherDB.tmp.mr.srcNonSharded_14 { _id: 1 }
m30000| Thu Jun 14 01:41:36 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:36 [conn7] CMD: drop mrShardOtherDB.mrReplaceOutSharded
m30000| Thu Jun 14 01:41:36 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_14
m30000| Thu Jun 14 01:41:36 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_14
m30000| Thu Jun 14 01:41:36 [conn7] CMD: drop mrShardOtherDB.tmp.mr.srcNonSharded_14
m30999| Thu Jun 14 01:41:36 [conn] distributed lock 'mrShardOtherDB.mrReplaceOutSharded/domU-12-31-39-01-70-B4:30999:1339652455:1804289383' unlocked.
m30001| Thu Jun 14 01:41:36 [conn2] CMD: drop mrShard.tmp.mrs.srcNonSharded_1339652495_21
{
"result" : {
"db" : "mrShardOtherDB",
"collection" : "mrReplaceOutSharded"
},
"counts" : {
"input" : NumberLong(51200),
"emit" : NumberLong(51200),
"reduce" : NumberLong(5120),
"output" : NumberLong(512)
},
"timeMillis" : 1348,
"timing" : {
"shardProcessing" : 1326,
"postProcessing" : 22
},
"shardCounts" : {
"localhost:30001" : {
"input" : 51200,
"emit" : 51200,
"reduce" : 5120,
"output" : 512
}
},
"postProcessCounts" : {
"localhost:30000" : {
"input" : NumberLong(512),
"reduce" : NumberLong(0),
"output" : NumberLong(512)
}
},
"ok" : 1,
}
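After this last run the output lives in mrShardOtherDB.mrReplaceOutSharded as a sharded collection. A hypothetical way to double-check that from the shell (not part of the captured test) would be:

var other = db.getSiblingDB("mrShardOtherDB");
print(other.mrReplaceOutSharded.count());                      // expect 512, matching counts.output
printjson(db.getSiblingDB("config").collections.findOne(
    { _id: "mrShardOtherDB.mrReplaceOutSharded" }));           // sharding metadata for the output ns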
m30000| Thu Jun 14 01:41:36 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:41:36 [interruptThread] now exiting
m30000| Thu Jun 14 01:41:36 dbexit:
m30000| Thu Jun 14 01:41:36 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:41:36 [interruptThread] closing listening socket: 39
m30000| Thu Jun 14 01:41:36 [interruptThread] closing listening socket: 40
m30000| Thu Jun 14 01:41:36 [interruptThread] closing listening socket: 41
m30000| Thu Jun 14 01:41:36 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:41:36 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:41:36 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:41:36 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:41:36 [conn5] end connection 127.0.0.1:48682 (9 connections now open)
m30001| Thu Jun 14 01:41:36 [conn8] end connection 127.0.0.1:48689 (8 connections now open)
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000]
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd97967607081b222f4028f') }
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd97967607081b222f4028f') }
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] Socket recv() errno:104 Connection reset by peer 127.0.0.1:30000
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [1] server [127.0.0.1:30000]
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] Assertion: 13632:couldn't get updated shard list from config server
m30999| 0x84f514a 0x8126495 0x83f3537 0x8529795 0x8522fb0 0x8530078 0x832f1b0 0x833179e 0x813c30e 0x11e542 0x25eb6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15StaticShardInfo6reloadEv+0xf05) [0x8529795]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo5Shard15reloadShardInfoEv+0x20) [0x8522fb0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo17WriteBackListener3runEv+0x3838) [0x8530078]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0x11e542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x25eb6e]
m30999| Thu Jun 14 01:41:36 [WriteBackListener-localhost:30000] ERROR: backgroundjob WriteBackListener-localhost:30000error: couldn't get updated shard list from config server
m30000| Thu Jun 14 01:41:36 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:41:36 [conn12] end connection 127.0.0.1:60110 (14 connections now open)
m30000| Thu Jun 14 01:41:36 [conn6] end connection 127.0.0.1:60098 (13 connections now open)
m30000| Thu Jun 14 01:41:36 [conn14] end connection 127.0.0.1:60113 (14 connections now open)
m30000| Thu Jun 14 01:41:36 [conn13] end connection 127.0.0.1:60112 (11 connections now open)
m30000| Thu Jun 14 01:41:36 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:41:36 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:41:36 dbexit: really exiting now
m30001| Thu Jun 14 01:41:37 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:41:37 [interruptThread] now exiting
m30001| Thu Jun 14 01:41:37 dbexit:
m30001| Thu Jun 14 01:41:37 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:41:37 [interruptThread] closing listening socket: 42
m30001| Thu Jun 14 01:41:37 [interruptThread] closing listening socket: 43
m30001| Thu Jun 14 01:41:37 [interruptThread] closing listening socket: 44
m30001| Thu Jun 14 01:41:37 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:41:37 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:41:37 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:41:37 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:41:37 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:41:37 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:41:37 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:41:37 dbexit: really exiting now
m30999| Thu Jun 14 01:41:37 [WriteBackListener-localhost:30001] SocketException: remote: 127.0.0.1:30001 error: 9001 socket exception [0] server [127.0.0.1:30001]
m30999| Thu Jun 14 01:41:37 [WriteBackListener-localhost:30001] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:41:37 [WriteBackListener-localhost:30001] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd97967607081b222f4028f') }
m30999| Thu Jun 14 01:41:37 [WriteBackListener-localhost:30001] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd97967607081b222f4028f') }
m30999| Thu Jun 14 01:41:37 [WriteBackListener-localhost:30001] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000]
m30999| Thu Jun 14 01:41:37 [WriteBackListener-localhost:30001] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:41:37 [WriteBackListener-localhost:30001] Assertion: 13632:couldn't get updated shard list from config server
m30999| 0x84f514a 0x8126495 0x83f3537 0x8529795 0x8522fb0 0x8530078 0x832f1b0 0x833179e 0x813c30e 0x11e542 0x25eb6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15StaticShardInfo6reloadEv+0xf05) [0x8529795]
m30999| /mnt/slaves/Linux_32bit/mo
45475.098848ms
Thu Jun 14 01:41:39 [initandlisten] connection accepted from 127.0.0.1:34774 #44 (31 connections now open)
*******************************************
Test : migrateBig.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/migrateBig.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/migrateBig.js";TestData.testFile = "migrateBig.js";TestData.testName = "migrateBig";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:41:39 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/migrateBig0'
Thu Jun 14 01:41:39 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/migrateBig0
m30000| Thu Jun 14 01:41:39
m30000| Thu Jun 14 01:41:39 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:41:39
m30000| Thu Jun 14 01:41:39 [initandlisten] MongoDB starting : pid=26741 port=30000 dbpath=/data/db/migrateBig0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:41:39 [initandlisten]
m30000| Thu Jun 14 01:41:39 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:41:39 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:41:39 [initandlisten]
m30000| Thu Jun 14 01:41:39 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:41:39 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:41:39 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:41:39 [initandlisten]
m30000| Thu Jun 14 01:41:39 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:41:39 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:41:39 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:41:39 [initandlisten] options: { dbpath: "/data/db/migrateBig0", port: 30000 }
m30000| Thu Jun 14 01:41:39 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:41:39 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/migrateBig1'
m30000| Thu Jun 14 01:41:39 [initandlisten] connection accepted from 127.0.0.1:60120 #1 (1 connection now open)
Thu Jun 14 01:41:39 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/migrateBig1
m30001| Thu Jun 14 01:41:39
m30001| Thu Jun 14 01:41:39 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:41:39
m30001| Thu Jun 14 01:41:39 [initandlisten] MongoDB starting : pid=26754 port=30001 dbpath=/data/db/migrateBig1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:41:39 [initandlisten]
m30001| Thu Jun 14 01:41:39 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:41:39 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:41:39 [initandlisten]
m30001| Thu Jun 14 01:41:39 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:41:39 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:41:39 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:41:39 [initandlisten]
m30001| Thu Jun 14 01:41:39 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:41:39 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:41:39 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:41:39 [initandlisten] options: { dbpath: "/data/db/migrateBig1", port: 30001 }
m30001| Thu Jun 14 01:41:39 [websvr] admin web console waiting for connections on port 31001
m30001| Thu Jun 14 01:41:39 [initandlisten] waiting for connections on port 30001
"localhost:30000"
ShardingTest migrateBig :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
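For context, this is the topology the jstests ShardingTest helper builds for the test: two mongod shards (ports 30000/30001), one mongos (port 30999) and a single config server. A minimal shell sketch, assuming the standard ShardingTest helper and a 1 MB chunk size (the constructor options shown are an assumption; the exact ones in migrateBig.js may differ, but 1 MB matches the maxChunkSizeBytes: 1048576 seen later in this log):

  // Hypothetical equivalent of the setup logged above.
  var s = new ShardingTest({ name: "migrateBig", shards: 2, mongos: 1, other: { chunksize: 1 } });
  var db    = s.getDB("test");    // routed through the mongos on port 30999
  var admin = s.getDB("admin");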
Thu Jun 14 01:41:40 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30001| Thu Jun 14 01:41:40 [initandlisten] connection accepted from 127.0.0.1:48697 #1 (1 connection now open)
m30000| Thu Jun 14 01:41:40 [initandlisten] connection accepted from 127.0.0.1:60123 #2 (2 connections now open)
m30000| Thu Jun 14 01:41:40 [FileAllocator] allocating new datafile /data/db/migrateBig0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:41:40 [FileAllocator] creating directory /data/db/migrateBig0/_tmp
m30000| Thu Jun 14 01:41:40 [initandlisten] connection accepted from 127.0.0.1:60125 #3 (3 connections now open)
m30999| Thu Jun 14 01:41:40 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:41:40 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26767 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:41:40 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:41:40 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:41:40 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:41:40 [FileAllocator] done allocating datafile /data/db/migrateBig0/config.ns, size: 16MB, took 0.261 secs
m30000| Thu Jun 14 01:41:40 [FileAllocator] allocating new datafile /data/db/migrateBig0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:41:40 [FileAllocator] done allocating datafile /data/db/migrateBig0/config.0, size: 16MB, took 0.649 secs
m30000| Thu Jun 14 01:41:40 [FileAllocator] allocating new datafile /data/db/migrateBig0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:41:40 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:41:40 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn2] insert config.settings keyUpdates:0 locks(micros) w:921254 921ms
m30000| Thu Jun 14 01:41:40 [initandlisten] connection accepted from 127.0.0.1:60130 #4 (4 connections now open)
m30000| Thu Jun 14 01:41:40 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:41:40 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:41:40 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:41:40 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:41:40 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:41:40 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:41:40 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:41:40 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:41:40 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:41:40
m30999| Thu Jun 14 01:41:40 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:41:40 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:41:40 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [initandlisten] connection accepted from 127.0.0.1:60131 #5 (5 connections now open)
m30000| Thu Jun 14 01:41:40 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:41:40 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:41:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd97994a5cef1e94c12348e
m30999| Thu Jun 14 01:41:40 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
m30999| Thu Jun 14 01:41:40 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652500:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:41:40 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:40 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:41:40 [conn3] build index done. scanned 1 total records. 0 secs
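The "distributed lock" and lock-ping machinery above is just documents in the config database, so it can be inspected from any shell pointed at the mongos; a small read-only sketch (collection names taken from the log, connection string assumed):

  // Look at the distributed-lock bookkeeping used by the balancer and by split/move operations.
  var config = new Mongo("localhost:30999").getDB("config");
  config.locks.find().forEach(printjson);      // e.g. the 'balancer' and 'test.foo' lock documents
  config.lockpings.find().forEach(printjson);  // one ping document per lock-holding process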
m30999| Thu Jun 14 01:41:41 [mongosMain] connection accepted from 127.0.0.1:54194 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:41:41 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:41:41 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:41:41 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:41:41 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:41:41 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:41:41 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30001| Thu Jun 14 01:41:41 [initandlisten] connection accepted from 127.0.0.1:48708 #2 (2 connections now open)
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:41:41 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:41:41 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:41:41 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:41:41 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { x: 1.0 } }
m30999| Thu Jun 14 01:41:41 [conn] enable sharding on: test.foo with shard key: { x: 1.0 }
m30999| Thu Jun 14 01:41:41 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd97995a5cef1e94c12348f
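The "enabling sharding" and "CMD: shardcollection" lines map onto two plain admin commands; through the same mongos, a minimal sketch is:

  // Shard test.foo on { x: 1 }; mongos creates the single initial MinKey -->> MaxKey chunk on shard0001.
  var admin = new Mongo("localhost:30999").getDB("admin");
  admin.runCommand({ enableSharding: "test" });
  admin.runCommand({ shardCollection: "test.foo", key: { x: 1 } });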
m30001| Thu Jun 14 01:41:41 [FileAllocator] allocating new datafile /data/db/migrateBig1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:41:41 [FileAllocator] creating directory /data/db/migrateBig1/_tmp
m30999| Thu Jun 14 01:41:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd97995a5cef1e94c12348f based on: (empty)
m30000| Thu Jun 14 01:41:41 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:41:41 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:41:41 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97994a5cef1e94c12348d
m30000| Thu Jun 14 01:41:41 [initandlisten] connection accepted from 127.0.0.1:60134 #6 (6 connections now open)
m30999| Thu Jun 14 01:41:41 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:41:41 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97994a5cef1e94c12348d
m30001| Thu Jun 14 01:41:41 [initandlisten] connection accepted from 127.0.0.1:48710 #3 (3 connections now open)
m30000| Thu Jun 14 01:41:41 [FileAllocator] done allocating datafile /data/db/migrateBig0/config.1, size: 32MB, took 0.829 secs
m30001| Thu Jun 14 01:41:42 [FileAllocator] done allocating datafile /data/db/migrateBig1/test.ns, size: 16MB, took 0.613 secs
m30001| Thu Jun 14 01:41:42 [FileAllocator] allocating new datafile /data/db/migrateBig1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:41:42 [FileAllocator] done allocating datafile /data/db/migrateBig1/test.0, size: 16MB, took 0.317 secs
m30001| Thu Jun 14 01:41:42 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:41:42 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:42 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:41:42 [conn2] build index test.foo { x: 1.0 }
m30001| Thu Jun 14 01:41:42 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:41:42 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:7 W:85 r:262 w:1639117 1639ms
m30001| Thu Jun 14 01:41:42 [FileAllocator] allocating new datafile /data/db/migrateBig1/test.1, filling with zeroes...
m30001| Thu Jun 14 01:41:42 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97994a5cef1e94c12348d'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:357 reslen:51 1636ms
m30001| Thu Jun 14 01:41:42 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:41:42 [initandlisten] connection accepted from 127.0.0.1:60136 #7 (7 connections now open)
m30001| Thu Jun 14 01:41:42 [initandlisten] connection accepted from 127.0.0.1:48712 #4 (4 connections now open)
m30001| Thu Jun 14 01:41:42 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:42 [conn4] warning: chunk is larger than 1024 bytes because of key { x: 0.0 }
m30001| Thu Jun 14 01:41:42 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:42 [conn4] warning: chunk is larger than 1024 bytes because of key { x: 0.0 }
m30001| Thu Jun 14 01:41:42 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:42 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:42 [conn4] warning: chunk is larger than 1024 bytes because of key { x: 0.0 }
m30001| Thu Jun 14 01:41:42 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 0.0 } ], shardId: "test.foo-x_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:42 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97996e03c9aca9cf9cf7e
m30001| Thu Jun 14 01:41:42 [conn4] splitChunk accepted at version 1|0||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:42-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652502724), what: "split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:41:42 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652502:275532884 (sleeping for 30000ms)
m30000| Thu Jun 14 01:41:42 [initandlisten] connection accepted from 127.0.0.1:60138 #8 (8 connections now open)
m30001| Thu Jun 14 01:41:42 [conn4] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:42 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:41:42 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 0.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 53.0 } ], shardId: "test.foo-x_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:42 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97996e03c9aca9cf9cf7f
m30001| Thu Jun 14 01:41:42 [conn4] splitChunk accepted at version 1|2||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:42-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652502737), what: "split", ns: "test.foo", details: { before: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 0.0 }, max: { x: 53.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 53.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd97995a5cef1e94c12348f based on: 1|0||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } on: { x: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:41:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd97995a5cef1e94c12348f based on: 1|2||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } on: { x: 53.0 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:41:42 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { x: 0.0 } max: { x: 53.0 }
m30001| Thu Jun 14 01:41:42 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 0.0 }, max: { x: 53.0 }, from: "shard0001", splitKeys: [ { x: 33.0 } ], shardId: "test.foo-x_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:42 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97996e03c9aca9cf9cf80
m30001| Thu Jun 14 01:41:42 [conn4] splitChunk accepted at version 1|4||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:42-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652502779), what: "split", ns: "test.foo", details: { before: { min: { x: 0.0 }, max: { x: 53.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 0.0 }, max: { x: 33.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 33.0 }, max: { x: 53.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd97995a5cef1e94c12348f based on: 1|4||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:42 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { x: 53.0 } max: { x: MaxKey }
m30001| Thu Jun 14 01:41:42 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 53.0 }, max: { x: MaxKey }, from: "shard0001", splitKeys: [ { x: 66.0 } ], shardId: "test.foo-x_53.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:42 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97996e03c9aca9cf9cf81
m30001| Thu Jun 14 01:41:42 [conn4] splitChunk accepted at version 1|6||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:42-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652502783), what: "split", ns: "test.foo", details: { before: { min: { x: 53.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 53.0 }, max: { x: 66.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 66.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||4fd97995a5cef1e94c12348f based on: 1|6||4fd97995a5cef1e94c12348f
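The splits at x: 33 and x: 66 above are not autosplits (those are the earlier splitThreshold-driven ones) but explicit cuts requested by the test; in command form they are the split admin command with a "middle" key, e.g.:

  // Force splits at fixed shard-key values (the equivalent of the 'splitting:' requests above).
  var admin = new Mongo("localhost:30999").getDB("admin");
  admin.runCommand({ split: "test.foo", middle: { x: 33 } });
  admin.runCommand({ split: "test.foo", middle: { x: 66 } });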
m30999| Thu Jun 14 01:41:42 [conn] CMD: movechunk: { movechunk: "test.foo", find: { x: 90.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:41:42 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { x: 66.0 } max: { x: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:41:42 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 66.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_66.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:42 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:42 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97996e03c9aca9cf9cf82
m30001| Thu Jun 14 01:41:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:42-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652502786), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 66.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:42 [conn4] moveChunk request accepted at version 1|8||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:42 [conn4] moveChunk number of documents: 34
m30001| Thu Jun 14 01:41:42 [initandlisten] connection accepted from 127.0.0.1:48714 #5 (5 connections now open)
m30000| Thu Jun 14 01:41:42 [FileAllocator] allocating new datafile /data/db/migrateBig0/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:41:43 [FileAllocator] done allocating datafile /data/db/migrateBig1/test.1, size: 32MB, took 0.698 secs
m30000| Thu Jun 14 01:41:43 [FileAllocator] done allocating datafile /data/db/migrateBig0/test.ns, size: 16MB, took 0.765 secs
m30000| Thu Jun 14 01:41:43 [FileAllocator] allocating new datafile /data/db/migrateBig0/test.0, filling with zeroes...
m30001| Thu Jun 14 01:41:43 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 66.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:41:43 [FileAllocator] done allocating datafile /data/db/migrateBig0/test.0, size: 16MB, took 0.292 secs
m30000| Thu Jun 14 01:41:43 [FileAllocator] allocating new datafile /data/db/migrateBig0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:41:43 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:41:43 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:43 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:41:43 [migrateThread] build index test.foo { x: 1.0 }
m30000| Thu Jun 14 01:41:43 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:41:43 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 66.0 } -> { x: MaxKey }
m30000| Thu Jun 14 01:41:44 [FileAllocator] done allocating datafile /data/db/migrateBig0/test.1, size: 32MB, took 0.608 secs
m30001| Thu Jun 14 01:41:44 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 66.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 34, clonedBytes: 341462, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:41:44 [conn4] moveChunk setting version to: 2|0||4fd97995a5cef1e94c12348f
m30000| Thu Jun 14 01:41:44 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 66.0 } -> { x: MaxKey }
m30000| Thu Jun 14 01:41:44 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652504797), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 66.0 }, max: { x: MaxKey }, step1 of 5: 1069, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 933 } }
m30000| Thu Jun 14 01:41:44 [initandlisten] connection accepted from 127.0.0.1:60140 #9 (9 connections now open)
m30001| Thu Jun 14 01:41:44 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 66.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 34, clonedBytes: 341462, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:41:44 [conn4] moveChunk updating self version to: 2|1||4fd97995a5cef1e94c12348f through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504802), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 66.0 }, max: { x: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:44 [conn4] doing delete inline
m30001| Thu Jun 14 01:41:44 [conn4] moveChunk deleted: 34
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504807), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 66.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2006, step5 of 6: 8, step6 of 6: 3 } }
m30001| Thu Jun 14 01:41:44 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 66.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_66.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:774 w:2758 reslen:37 2021ms
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 2|1||4fd97995a5cef1e94c12348f based on: 1|8||4fd97995a5cef1e94c12348f
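The "CMD: movechunk" above is the moveChunk admin command: mongos locates the chunk owning the given document (x: 90 falls in { x: 66 } -->> MaxKey) and migrates it to the named shard. Reproduced by hand:

  // Move the chunk containing { x: 90 } from shard0001 to shard0000.
  var admin = new Mongo("localhost:30999").getDB("admin");
  printjson(admin.runCommand({ moveChunk: "test.foo", find: { x: 90 }, to: "localhost:30000" }));
  // 'to' also accepts the shard name ("shard0000") instead of the host string used by this test.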
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0001 4
shard0000 1
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0001 Timestamp(2000, 1)
{ "x" : 0 } -->> { "x" : 33 } on : shard0001 Timestamp(1000, 5)
{ "x" : 33 } -->> { "x" : 53 } on : shard0001 Timestamp(1000, 6)
{ "x" : 53 } -->> { "x" : 66 } on : shard0001 Timestamp(1000, 7)
{ "x" : 66 } -->> { "x" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
YO : localhost:30001
direct : connection to localhost:30001
m30001| Thu Jun 14 01:41:44 [initandlisten] connection accepted from 127.0.0.1:48716 #6 (6 connections now open)
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0001 4
shard0000 1
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0001 Timestamp(2000, 1)
{ "x" : 0 } -->> { "x" : 33 } on : shard0001 Timestamp(1000, 5)
{ "x" : 33 } -->> { "x" : 53 } on : shard0001 Timestamp(1000, 6)
{ "x" : 53 } -->> { "x" : 66 } on : shard0001 Timestamp(1000, 7)
{ "x" : 66 } -->> { "x" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
m30999| Thu Jun 14 01:41:44 [conn] CMD: movechunk: { movechunk: "test.foo", find: { x: 50.0 }, to: "localhost:30000" }
m30999| Thu Jun 14 01:41:44 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 33.0 } max: { x: 53.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30999| Thu Jun 14 01:41:44 [conn] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 2310120, errmsg: "chunk too big to move", ok: 0.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 33.0 }, max: { x: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_33.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf83
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504940), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:44 [conn4] moveChunk request accepted at version 2|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] warning: can't move chunk of size (approximately) 2310120 because maximum size allowed to move is 1048576 ns: test.foo { x: 33.0 } -> { x: 53.0 }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504942), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 53.0 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } }
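The aborted migration above is the donor shard refusing to move the { x: 33 } -->> { x: 53 } range: its estimated size (2310120 bytes) is more than double the 1 MB maxChunkSizeBytes in force for this test. The estimate can be reproduced with the dataSize command against the donor shard (the shard's internal check may differ slightly):

  // Ask the donor shard how large the refused key range is (read-only).
  var shardAdmin = new Mongo("localhost:30001").getDB("admin");
  printjson(shardAdmin.runCommand({ dataSize: "test.foo", keyPattern: { x: 1 }, min: { x: 33 }, max: { x: 53 } }));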
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { x: 0.0 } max: { x: 33.0 }
command { "split" : "test.foo", "middle" : { "x" : 0 } } failed: { "ok" : 0, "errmsg" : "cannot split on initial or final chunk's key" }
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { x: 0.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 0.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 2.0 } ], shardId: "test.foo-x_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf84
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504945), what: "split", ns: "test.foo", details: { before: { min: { x: 0.0 }, max: { x: 33.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 0.0 }, max: { x: 2.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 2.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 2|3||4fd97995a5cef1e94c12348f based on: 2|1||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { x: 2.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 2.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 4.0 } ], shardId: "test.foo-x_2.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf85
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|3||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504949), what: "split", ns: "test.foo", details: { before: { min: { x: 2.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 2.0 }, max: { x: 4.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 4.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 2|5||4fd97995a5cef1e94c12348f based on: 2|3||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { x: 4.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 4.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 6.0 } ], shardId: "test.foo-x_4.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf86
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|5||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504953), what: "split", ns: "test.foo", details: { before: { min: { x: 4.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 4.0 }, max: { x: 6.0 }, lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 6.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 2|7||4fd97995a5cef1e94c12348f based on: 2|5||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { x: 6.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 6.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 8.0 } ], shardId: "test.foo-x_6.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf87
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|7||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504956), what: "split", ns: "test.foo", details: { before: { min: { x: 6.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 6.0 }, max: { x: 8.0 }, lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 8.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 2|9||4fd97995a5cef1e94c12348f based on: 2|7||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|9||000000000000000000000000 min: { x: 8.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 8.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 10.0 } ], shardId: "test.foo-x_8.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf88
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|9||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504960), what: "split", ns: "test.foo", details: { before: { min: { x: 8.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 8.0 }, max: { x: 10.0 }, lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 10.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 2|11||4fd97995a5cef1e94c12348f based on: 2|9||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|11||000000000000000000000000 min: { x: 10.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 10.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 12.0 } ], shardId: "test.foo-x_10.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf89
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|11||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504964), what: "split", ns: "test.foo", details: { before: { min: { x: 10.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 10.0 }, max: { x: 12.0 }, lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 12.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 2|13||4fd97995a5cef1e94c12348f based on: 2|11||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|13||000000000000000000000000 min: { x: 12.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 12.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 14.0 } ], shardId: "test.foo-x_12.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf8a
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|13||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504967), what: "split", ns: "test.foo", details: { before: { min: { x: 12.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 12.0 }, max: { x: 14.0 }, lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 14.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|15, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 2|15||4fd97995a5cef1e94c12348f based on: 2|13||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|15||000000000000000000000000 min: { x: 14.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 14.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 16.0 } ], shardId: "test.foo-x_14.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf8b
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|15||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504971), what: "split", ns: "test.foo", details: { before: { min: { x: 14.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 14.0 }, max: { x: 16.0 }, lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 16.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 2|17||4fd97995a5cef1e94c12348f based on: 2|15||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:44 [conn] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|17||000000000000000000000000 min: { x: 16.0 } max: { x: 33.0 }
m30001| Thu Jun 14 01:41:44 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 16.0 }, max: { x: 33.0 }, from: "shard0001", splitKeys: [ { x: 18.0 } ], shardId: "test.foo-x_16.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd97998e03c9aca9cf9cf8c
m30001| Thu Jun 14 01:41:44 [conn4] splitChunk accepted at version 2|17||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:44-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652504974), what: "split", ns: "test.foo", details: { before: { min: { x: 16.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 16.0 }, max: { x: 18.0 }, lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 18.0 }, max: { x: 33.0 }, lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 2|19||4fd97995a5cef1e94c12348f based on: 2|17||4fd97995a5cef1e94c12348f
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
test.foo chunks:
shard0001 13
shard0000 1
{ "x" : { $minKey : 1 } } -->> { "x" : 0 } on : shard0001 Timestamp(2000, 1)
{ "x" : 0 } -->> { "x" : 2 } on : shard0001 Timestamp(2000, 2)
{ "x" : 2 } -->> { "x" : 4 } on : shard0001 Timestamp(2000, 4)
{ "x" : 4 } -->> { "x" : 6 } on : shard0001 Timestamp(2000, 6)
{ "x" : 6 } -->> { "x" : 8 } on : shard0001 Timestamp(2000, 8)
{ "x" : 8 } -->> { "x" : 10 } on : shard0001 Timestamp(2000, 10)
{ "x" : 10 } -->> { "x" : 12 } on : shard0001 Timestamp(2000, 12)
{ "x" : 12 } -->> { "x" : 14 } on : shard0001 Timestamp(2000, 14)
{ "x" : 14 } -->> { "x" : 16 } on : shard0001 Timestamp(2000, 16)
{ "x" : 16 } -->> { "x" : 18 } on : shard0001 Timestamp(2000, 18)
{ "x" : 18 } -->> { "x" : 33 } on : shard0001 Timestamp(2000, 19)
{ "x" : 33 } -->> { "x" : 53 } on : shard0001 Timestamp(1000, 6)
{ "x" : 53 } -->> { "x" : 66 } on : shard0001 Timestamp(1000, 7)
{ "x" : 66 } -->> { "x" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
ShardingTest input: { "shard0000" : 1, "shard0001" : 13 } min: 1 max: 13
chunk diff: 12
ShardingTest input: { "shard0000" : 1, "shard0001" : 13 } min: 1 max: 13
chunk diff: 12
ShardingTest input: { "shard0000" : 1, "shard0001" : 13 } min: 1 max: 13
chunk diff: 12
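The repeated "ShardingTest input ... chunk diff" lines are the test polling until the balancer narrows the per-shard chunk-count gap; the counts come straight from config.chunks. A sketch of the same check (connection string assumed):

  // Recompute the per-shard chunk counts the test is polling on.
  var config = new Mongo("localhost:30999").getDB("config");
  var counts = {};
  config.chunks.find({ ns: "test.foo" }).forEach(function (c) {
      counts[c.shard] = (counts[c.shard] || 0) + 1;
  });
  printjson(counts);                                  // e.g. { "shard0000" : 1, "shard0001" : 13 }
  var vals = [];
  for (var k in counts) vals.push(counts[k]);
  print("chunk diff: " + (Math.max.apply(null, vals) - Math.min.apply(null, vals)));  // 12 at this point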
m30000| Thu Jun 14 01:41:50 [initandlisten] connection accepted from 127.0.0.1:60142 #10 (10 connections now open)
m30999| Thu Jun 14 01:41:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd9799ea5cef1e94c123490
m30999| Thu Jun 14 01:41:50 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:50 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:50 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:50 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:50 [Balancer] shard0000
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_66.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 66.0 }, max: { x: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:50 [Balancer] shard0001
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 0.0 }, max: { x: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_2.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 2.0 }, max: { x: 4.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_4.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 4.0 }, max: { x: 6.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_6.0", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 6.0 }, max: { x: 8.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_8.0", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 8.0 }, max: { x: 10.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_10.0", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 10.0 }, max: { x: 12.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_12.0", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 12.0 }, max: { x: 14.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_16.0", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_18.0", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_33.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 53.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] ----
m30999| Thu Jun 14 01:41:50 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_53.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:50 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { x: 53.0 } max: { x: 66.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:41:50 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 53.0 }, max: { x: 66.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_53.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:50 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:50 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd9799ee03c9aca9cf9cf8d
m30001| Thu Jun 14 01:41:50 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:50-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652510946), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 66.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:50 [conn4] moveChunk request accepted at version 2|19||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:50 [conn4] moveChunk number of documents: 13
m30000| Thu Jun 14 01:41:50 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 53.0 } -> { x: 66.0 }
ShardingTest input: { "shard0000" : 1, "shard0001" : 13 } min: 1 max: 13
chunk diff: 12
m30001| Thu Jun 14 01:41:51 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 53.0 }, max: { x: 66.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 13, clonedBytes: 130559, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:41:51 [conn4] moveChunk setting version to: 3|0||4fd97995a5cef1e94c12348f
m30000| Thu Jun 14 01:41:51 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 53.0 } -> { x: 66.0 }
m30000| Thu Jun 14 01:41:51 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:51-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652511958), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 66.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1, step4 of 5: 0, step5 of 5: 1008 } }
m30001| Thu Jun 14 01:41:51 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 53.0 }, max: { x: 66.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 13, clonedBytes: 130559, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:41:51 [conn4] moveChunk updating self version to: 3|1||4fd97995a5cef1e94c12348f through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:41:51 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:51-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652511962), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 66.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:51 [conn4] doing delete inline
m30001| Thu Jun 14 01:41:51 [conn4] moveChunk deleted: 13
m30001| Thu Jun 14 01:41:51 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:41:51 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:51-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652511964), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 53.0 }, max: { x: 66.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 1 } }
m30001| Thu Jun 14 01:41:51 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 53.0 }, max: { x: 66.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_53.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:1323 w:3852 reslen:37 1018ms
m30999| Thu Jun 14 01:41:51 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 3|1||4fd97995a5cef1e94c12348f based on: 2|19||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:51 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
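
The donor's moveChunk.start / moveChunk.commit / moveChunk.from events above are written to the config server's changelog, and the step timings (step1 of 6 ... step6 of 6) record how long each donor-side phase took in milliseconds. A minimal shell sketch for pulling that history back out, assuming 'db' is a connection to the mongos:

    // Read the migration events that the "about to log metadata event" lines
    // above are announcing; 'db' is assumed to be connected to the mongos.
    var changelog = db.getSiblingDB("config").changelog;
    changelog.find({ what: /^moveChunk/, ns: "test.foo" })
             .sort({ time: 1 })
             .forEach(function (e) {
                 print(e.time + "  " + e.what + "  " + tojson(e.details));
             });
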
ShardingTest input: { "shard0000" : 2, "shard0001" : 12 } min: 2 max: 12
chunk diff: 10
ShardingTest input: { "shard0000" : 2, "shard0001" : 12 } min: 2 max: 12
chunk diff: 10
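
The interleaved "ShardingTest input: {...} min ... max" / "chunk diff" lines are the test polling how many chunks of test.foo each shard owns and reporting the spread between the most- and least-loaded shard. A rough shell equivalent (chunkDiff is a hypothetical helper for illustration, not the harness's actual code):

    // Count chunks per shard from the config metadata and return max - min,
    // e.g. 12 - 2 = 10 for the poll just above.
    function chunkDiff(configDB, ns) {
        var counts = {};                                // shard -> chunk count
        configDB.chunks.find({ ns: ns }).forEach(function (c) {
            counts[c.shard] = (counts[c.shard] || 0) + 1;
        });
        var min = Number.MAX_VALUE, max = 0;
        for (var s in counts) {
            min = Math.min(min, counts[s]);
            max = Math.max(max, counts[s]);
        }
        print("input: " + tojson(counts) + " min: " + min + " max: " + max);
        return max - min;
    }

    // Usage against a mongos connection 'db' (assumed):
    // chunkDiff(db.getSiblingDB("config"), "test.foo");
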
m30999| Thu Jun 14 01:41:56 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd979a4a5cef1e94c123491
m30999| Thu Jun 14 01:41:56 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:41:56 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:56 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:41:56 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:41:56 [Balancer] shard0000
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_66.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 66.0 }, max: { x: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:41:56 [Balancer] shard0001
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 0.0 }, max: { x: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_2.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 2.0 }, max: { x: 4.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_4.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 4.0 }, max: { x: 6.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_6.0", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 6.0 }, max: { x: 8.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_8.0", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 8.0 }, max: { x: 10.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_10.0", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 10.0 }, max: { x: 12.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_12.0", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 12.0 }, max: { x: 14.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_16.0", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_18.0", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] { _id: "test.foo-x_33.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 53.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] ----
m30999| Thu Jun 14 01:41:56 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_33.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 53.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:41:56 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { x: 33.0 } max: { x: 53.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:41:56 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 33.0 }, max: { x: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_33.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:56 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd979a4e03c9aca9cf9cf8e
m30001| Thu Jun 14 01:41:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:56-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652516970), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:41:56 [conn4] moveChunk request accepted at version 3|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:56 [conn4] warning: can't move chunk of size (approximately) 2310120 because maximum size allowed to move is 1048576 ns: test.foo { x: 33.0 } -> { x: 53.0 }
m30001| Thu Jun 14 01:41:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:41:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:56-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652516972), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 53.0 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } }
m30999| Thu Jun 14 01:41:56 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 2310120, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Thu Jun 14 01:41:56 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 2310120, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { x: 33.0 } max: { x: 53.0 }
m30999| Thu Jun 14 01:41:56 [Balancer] forcing a split because migrate failed for size reasons
m30001| Thu Jun 14 01:41:56 [conn4] request split points lookup for chunk test.foo { : 33.0 } -->> { : 53.0 }
m30001| Thu Jun 14 01:41:56 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 33.0 }, max: { x: 53.0 }, from: "shard0001", splitKeys: [ { x: 50.520025989117 } ], shardId: "test.foo-x_33.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:41:56 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:41:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd979a4e03c9aca9cf9cf8f
m30001| Thu Jun 14 01:41:56 [conn4] splitChunk accepted at version 3|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:41:56 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:41:56-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652516975), what: "split", ns: "test.foo", details: { before: { min: { x: 33.0 }, max: { x: 53.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 33.0 }, max: { x: 50.520025989117 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') }, right: { min: { x: 50.520025989117 }, max: { x: 53.0 }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f') } } }
m30001| Thu Jun 14 01:41:56 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30999| Thu Jun 14 01:41:56 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 3|3||4fd97995a5cef1e94c12348f based on: 3|1||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:41:56 [Balancer] forced split results: { ok: 1.0 }
m30999| Thu Jun 14 01:41:56 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
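
This round shows the size guard in action: the donor estimated the { x: 33.0 } -> { x: 53.0 } chunk at 2310120 bytes, more than the 1048576-byte limit, refused the move with chunkTooBig, and the balancer reacted by forcing a split at x: 50.520025989117 so a smaller piece can move next time. The same recovery can be reproduced by hand; a sketch, assuming 'db' is connected to the mongos (the split point and target shard here are illustrative, not taken from the test):

    var admin = db.getSiblingDB("admin");

    // Ask mongos to find a median split point inside the oversized range and
    // split there, much like the forced splitChunk above.
    printjson(admin.runCommand({ split: "test.foo", find: { x: 40 } }));

    // Once the range is split, a normal migration of one half can succeed,
    // which is exactly what the next balancer round does.
    printjson(admin.runCommand({
        moveChunk: "test.foo",
        find: { x: 51 },           // lands in { x: 50.52... } -> { x: 53.0 }
        to: "shard0000"
    }));
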
ShardingTest input: { "shard0000" : 2, "shard0001" : 13 } min: 2 max: 13
chunk diff: 11
ShardingTest input: { "shard0000" : 2, "shard0001" : 13 } min: 2 max: 13
chunk diff: 11
ShardingTest input: { "shard0000" : 2, "shard0001" : 13 } min: 2 max: 13
chunk diff: 11
ShardingTest input: { "shard0000" : 2, "shard0001" : 13 } min: 2 max: 13
chunk diff: 11
ShardingTest input: { "shard0000" : 2, "shard0001" : 13 } min: 2 max: 13
chunk diff: 11
m30999| Thu Jun 14 01:42:06 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd979aea5cef1e94c123492
m30999| Thu Jun 14 01:42:06 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:42:06 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:06 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:06 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:42:06 [Balancer] shard0000
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_66.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 66.0 }, max: { x: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:06 [Balancer] shard0001
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 0.0 }, max: { x: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_2.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 2.0 }, max: { x: 4.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_4.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 4.0 }, max: { x: 6.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_6.0", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 6.0 }, max: { x: 8.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_8.0", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 8.0 }, max: { x: 10.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_10.0", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 10.0 }, max: { x: 12.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_12.0", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 12.0 }, max: { x: 14.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_16.0", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_18.0", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_33.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 50.520025989117 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] { _id: "test.foo-x_50.520025989117", lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 50.520025989117 }, max: { x: 53.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] ----
m30999| Thu Jun 14 01:42:06 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_50.520025989117", lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 50.520025989117 }, max: { x: 53.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:06 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 3|3||000000000000000000000000 min: { x: 50.520025989117 } max: { x: 53.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:06 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 50.520025989117 }, max: { x: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_50.520025989117", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:06 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:06 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd979aee03c9aca9cf9cf90
m30001| Thu Jun 14 01:42:06 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:06-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652526983), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 50.520025989117 }, max: { x: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:06 [conn4] moveChunk request accepted at version 3|3||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:42:06 [conn4] moveChunk number of documents: 99
m30000| Thu Jun 14 01:42:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 50.520025989117 } -> { x: 53.0 }
ShardingTest input: { "shard0000" : 2, "shard0001" : 13 } min: 2 max: 13
chunk diff: 11
m30001| Thu Jun 14 01:42:07 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 50.520025989117 }, max: { x: 53.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 99, clonedBytes: 994257, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:07 [conn4] moveChunk setting version to: 4|0||4fd97995a5cef1e94c12348f
m30000| Thu Jun 14 01:42:07 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 50.520025989117 } -> { x: 53.0 }
m30000| Thu Jun 14 01:42:07 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:07-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652527999), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 50.520025989117 }, max: { x: 53.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 997 } }
m30001| Thu Jun 14 01:42:08 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 50.520025989117 }, max: { x: 53.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 99, clonedBytes: 994257, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:08 [conn4] moveChunk updating self version to: 4|1||4fd97995a5cef1e94c12348f through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:08-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652528003), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 50.520025989117 }, max: { x: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:08 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:08 [conn4] moveChunk deleted: 99
m30001| Thu Jun 14 01:42:08 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:42:08 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:08-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652528012), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 50.520025989117 }, max: { x: 53.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 16, step6 of 6: 8 } }
m30001| Thu Jun 14 01:42:08 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 50.520025989117 }, max: { x: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_50.520025989117", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2476 w:11361 reslen:37 1030ms
m30999| Thu Jun 14 01:42:08 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 4|1||4fd97995a5cef1e94c12348f based on: 3|3||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:42:08 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
ShardingTest input: { "shard0000" : 3, "shard0001" : 12 } min: 3 max: 12
chunk diff: 9
ShardingTest input: { "shard0000" : 3, "shard0001" : 12 } min: 3 max: 12
chunk diff: 9
m30999| Thu Jun 14 01:42:13 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd979b5a5cef1e94c123493
m30999| Thu Jun 14 01:42:13 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:42:13 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:13 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:13 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:42:13 [Balancer] shard0000
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_50.520025989117", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 50.520025989117 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_66.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 66.0 }, max: { x: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:13 [Balancer] shard0001
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 0.0 }, max: { x: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_2.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 2.0 }, max: { x: 4.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_4.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 4.0 }, max: { x: 6.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_6.0", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 6.0 }, max: { x: 8.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_8.0", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 8.0 }, max: { x: 10.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_10.0", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 10.0 }, max: { x: 12.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_12.0", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 12.0 }, max: { x: 14.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_16.0", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_18.0", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] { _id: "test.foo-x_33.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 50.520025989117 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] ----
m30999| Thu Jun 14 01:42:13 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_33.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 50.520025989117 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:13 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 3|2||000000000000000000000000 min: { x: 33.0 } max: { x: 50.520025989117 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:13 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 33.0 }, max: { x: 50.520025989117 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_33.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:13 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:13 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd979b5e03c9aca9cf9cf91
m30001| Thu Jun 14 01:42:13 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:13-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652533019), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 50.520025989117 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:13 [conn4] moveChunk request accepted at version 4|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:42:13 [conn4] moveChunk number of documents: 131
ShardingTest input: { "shard0000" : 3, "shard0001" : 12 } min: 3 max: 12
chunk diff: 9
m30000| Thu Jun 14 01:42:13 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 33.0 } -> { x: 50.520025989117 }
m30001| Thu Jun 14 01:42:14 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 33.0 }, max: { x: 50.520025989117 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 131, clonedBytes: 1315633, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:14 [conn4] moveChunk setting version to: 5|0||4fd97995a5cef1e94c12348f
m30000| Thu Jun 14 01:42:14 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 33.0 } -> { x: 50.520025989117 }
m30000| Thu Jun 14 01:42:14 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:14-3", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652534031), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 50.520025989117 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 23, step4 of 5: 0, step5 of 5: 986 } }
m30001| Thu Jun 14 01:42:14 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 33.0 }, max: { x: 50.520025989117 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 131, clonedBytes: 1315633, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:14 [conn4] moveChunk updating self version to: 5|1||4fd97995a5cef1e94c12348f through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:14 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:14-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652534036), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 50.520025989117 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:14 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:14 [conn4] moveChunk deleted: 131
m30001| Thu Jun 14 01:42:14 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:42:14 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:14-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652534047), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 33.0 }, max: { x: 50.520025989117 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 10 } }
m30001| Thu Jun 14 01:42:14 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 33.0 }, max: { x: 50.520025989117 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_33.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2785 w:21272 reslen:37 1028ms
m30999| Thu Jun 14 01:42:14 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 5|1||4fd97995a5cef1e94c12348f based on: 4|1||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:42:14 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
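
Each committed migration bumps the collection's major chunk version: the log walks 2|19 -> 3|0/3|1 -> 4|0/4|1 -> 5|0/5|1, and every bump makes mongos reload its ChunkManager ("sequenceNumber ... version ... based on ..."). The versions live in config.chunks as the lastmod timestamps; a quick way to look at them from the shell, assuming 'db' is a mongos connection:

    // The migrated chunk gets the new major version, and one chunk left on the
    // donor is bumped too (the "updating self version" lines above).
    var chunks = db.getSiblingDB("config").chunks;
    chunks.find({ ns: "test.foo" })
          .sort({ lastmod: -1 })
          .limit(5)
          .forEach(function (c) {
              print(c.shard + "  " + tojson(c.min) + " -> " + tojson(c.max) +
                    "  lastmod: " + tojson(c.lastmod));
          });
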
ShardingTest input: { "shard0000" : 4, "shard0001" : 11 } min: 4 max: 11
chunk diff: 7
ShardingTest input: { "shard0000" : 4, "shard0001" : 11 } min: 4 max: 11
chunk diff: 7
m30999| Thu Jun 14 01:42:19 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd979bba5cef1e94c123494
m30999| Thu Jun 14 01:42:19 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:42:19 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:19 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:19 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:42:19 [Balancer] shard0000
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_33.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 50.520025989117 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_50.520025989117", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 50.520025989117 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_66.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 66.0 }, max: { x: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:19 [Balancer] shard0001
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 0.0 }, max: { x: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_2.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 2.0 }, max: { x: 4.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_4.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 4.0 }, max: { x: 6.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_6.0", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 6.0 }, max: { x: 8.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_8.0", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 8.0 }, max: { x: 10.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_10.0", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 10.0 }, max: { x: 12.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_12.0", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 12.0 }, max: { x: 14.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_16.0", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] { _id: "test.foo-x_18.0", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] ----
m30999| Thu Jun 14 01:42:19 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_18.0", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:19 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 2|19||000000000000000000000000 min: { x: 18.0 } max: { x: 33.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:19 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 18.0 }, max: { x: 33.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_18.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:19 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:19 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd979bbe03c9aca9cf9cf92
m30001| Thu Jun 14 01:42:19 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:19-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652539056), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 18.0 }, max: { x: 33.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:19 [conn4] moveChunk request accepted at version 5|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:42:19 [conn4] moveChunk number of documents: 15
ShardingTest input: { "shard0000" : 4, "shard0001" : 11 } min: 4 max: 11
chunk diff: 7
m30000| Thu Jun 14 01:42:19 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 18.0 } -> { x: 33.0 }
m30001| Thu Jun 14 01:42:20 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 18.0 }, max: { x: 33.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 15, clonedBytes: 150645, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:20 [conn4] moveChunk setting version to: 6|0||4fd97995a5cef1e94c12348f
m30000| Thu Jun 14 01:42:20 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 18.0 } -> { x: 33.0 }
m30000| Thu Jun 14 01:42:20 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:20-4", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652540060), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 18.0 }, max: { x: 33.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 3, step4 of 5: 0, step5 of 5: 999 } }
m30001| Thu Jun 14 01:42:20 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 18.0 }, max: { x: 33.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 15, clonedBytes: 150645, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:20 [conn4] moveChunk updating self version to: 6|1||4fd97995a5cef1e94c12348f through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:20-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652540064), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 18.0 }, max: { x: 33.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:20 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:20 [conn4] moveChunk deleted: 15
m30001| Thu Jun 14 01:42:20 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:42:20 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:20-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652540066), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 18.0 }, max: { x: 33.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 4, step6 of 6: 1 } }
m30001| Thu Jun 14 01:42:20 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 18.0 }, max: { x: 33.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_18.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2894 w:22566 reslen:37 1011ms
m30999| Thu Jun 14 01:42:20 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 6|1||4fd97995a5cef1e94c12348f based on: 5|1||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:42:20 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
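
The ShardInfoMap lines in every round report each shard's current data size and whether a per-shard cap applies; maxSize: 0 here means neither shard was registered with a size limit. A sketch of where that comes from, assuming 'db' is a mongos connection (the addShard line is purely illustrative):

    // List the registered shards; a maxSize (in MB) would show up here if one
    // had been given at addShard time.
    printjson(db.getSiblingDB("admin").runCommand({ listShards: 1 }));

    // Hypothetical example of registering a shard with a 100 MB cap:
    // db.getSiblingDB("admin").runCommand({ addShard: "localhost:30002", maxSize: 100 });
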
ShardingTest input: { "shard0000" : 5, "shard0001" : 10 } min: 5 max: 10
chunk diff: 5
ShardingTest input: { "shard0000" : 5, "shard0001" : 10 } min: 5 max: 10
chunk diff: 5
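
Each round starts with the mongos taking the cluster-wide 'balancer' distributed lock on the config server and ends by releasing it, which is what the acquired/unlocked lines bracket. The lock and the on/off switch are ordinary config documents; a sketch for inspecting them, assuming 'db' is a mongos connection:

    var config = db.getSiblingDB("config");

    // Current balancer lock document; its state field shows whether a mongos
    // holds it right now, and its 'who' field matches the
    // 'balancer/host:port:...' name in the log above.
    printjson(config.locks.findOne({ _id: "balancer" }));

    // Balancing can be paused for the whole cluster (sh.setBalancerState(false)
    // is the usual shell wrapper for this):
    config.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true);
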
m30999| Thu Jun 14 01:42:25 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd979c1a5cef1e94c123495
m30001| Thu Jun 14 01:42:25 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 16.0 }, max: { x: 18.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_16.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:25 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:25 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd979c1e03c9aca9cf9cf93
m30001| Thu Jun 14 01:42:25 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:25-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652545072), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 16.0 }, max: { x: 18.0 }, from: "shard0001", to: "shard0000" } }
ShardingTest input: { "shard0000" : 5, "shard0001" : 10 } min: 5 max: 10
chunk diff: 5
m30001| Thu Jun 14 01:42:25 [conn4] moveChunk request accepted at version 6|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:42:25 [conn4] moveChunk number of documents: 2
m30999| Thu Jun 14 01:42:25 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:42:25 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:25 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:25 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:42:25 [Balancer] shard0000
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_18.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_33.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 50.520025989117 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_50.520025989117", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 50.520025989117 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_66.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 66.0 }, max: { x: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:25 [Balancer] shard0001
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 0.0 }, max: { x: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_2.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 2.0 }, max: { x: 4.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_4.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 4.0 }, max: { x: 6.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_6.0", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 6.0 }, max: { x: 8.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_8.0", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 8.0 }, max: { x: 10.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_10.0", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 10.0 }, max: { x: 12.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_12.0", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 12.0 }, max: { x: 14.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] { _id: "test.foo-x_16.0", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] ----
m30999| Thu Jun 14 01:42:25 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_16.0", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:25 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 2|18||000000000000000000000000 min: { x: 16.0 } max: { x: 18.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30000| Thu Jun 14 01:42:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 16.0 } -> { x: 18.0 }
m30001| Thu Jun 14 01:42:26 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 16.0 }, max: { x: 18.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 2, clonedBytes: 20086, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:26 [conn4] moveChunk setting version to: 7|0||4fd97995a5cef1e94c12348f
m30000| Thu Jun 14 01:42:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 16.0 } -> { x: 18.0 }
m30000| Thu Jun 14 01:42:26 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:26-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652546084), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 16.0 }, max: { x: 18.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1010 } }
m30001| Thu Jun 14 01:42:26 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 16.0 }, max: { x: 18.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 2, clonedBytes: 20086, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:26 [conn4] moveChunk updating self version to: 7|1||4fd97995a5cef1e94c12348f through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:26-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652546088), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 16.0 }, max: { x: 18.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:26 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:26 [conn4] moveChunk deleted: 2
m30001| Thu Jun 14 01:42:26 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:42:26 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:26-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652546089), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 16.0 }, max: { x: 18.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1002, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:42:26 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 16.0 }, max: { x: 18.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_16.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2975 w:22960 reslen:37 1018ms
m30999| Thu Jun 14 01:42:26 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 7|1||4fd97995a5cef1e94c12348f based on: 6|1||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:42:26 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
ShardingTest input: { "shard0000" : 6, "shard0001" : 9 } min: 6 max: 9
chunk diff: 3
ShardingTest input: { "shard0000" : 6, "shard0001" : 9 } min: 6 max: 9
chunk diff: 3
ShardingTest input: { "shard0000" : 6, "shard0001" : 9 } min: 6 max: 9
chunk diff: 3
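
Every moveChunk request in this log carries maxChunkSizeBytes: 1048576, i.e. the test runs with a 1 MB chunk size so that splits and migrations trigger quickly on small data. That limit is the cluster-wide setting stored in config.settings (in MB); a sketch, assuming 'db' is a mongos connection:

    var settings = db.getSiblingDB("config").settings;

    // Inspect the current chunk size (in MB); 1 here corresponds to the
    // 1048576-byte limit in the moveChunk requests above.
    printjson(settings.findOne({ _id: "chunksize" }));

    // Changing it is an upsert on the same document:
    settings.update({ _id: "chunksize" }, { $set: { value: 64 } }, true);
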
m30999| Thu Jun 14 01:42:31 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' acquired, ts : 4fd979c7a5cef1e94c123496
m30001| Thu Jun 14 01:42:31 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 14.0 }, max: { x: 16.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_14.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' acquired, ts : 4fd979c7e03c9aca9cf9cf94
m30001| Thu Jun 14 01:42:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:31-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652551096), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 14.0 }, max: { x: 16.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:31 [conn4] moveChunk request accepted at version 7|1||4fd97995a5cef1e94c12348f
m30001| Thu Jun 14 01:42:31 [conn4] moveChunk number of documents: 2
m30999| Thu Jun 14 01:42:31 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:42:31 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:31 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:42:31 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:42:31 [Balancer] shard0000
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_16.0", lastmod: Timestamp 7000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 16.0 }, max: { x: 18.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_18.0", lastmod: Timestamp 6000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 18.0 }, max: { x: 33.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_33.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 33.0 }, max: { x: 50.520025989117 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_50.520025989117", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 50.520025989117 }, max: { x: 53.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_53.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 53.0 }, max: { x: 66.0 }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_66.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 66.0 }, max: { x: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:42:31 [Balancer] shard0001
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 0.0 }, max: { x: 2.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_2.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 2.0 }, max: { x: 4.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_4.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 4.0 }, max: { x: 6.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_6.0", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 6.0 }, max: { x: 8.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_8.0", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 8.0 }, max: { x: 10.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_10.0", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 10.0 }, max: { x: 12.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_12.0", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 12.0 }, max: { x: 14.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] ----
m30999| Thu Jun 14 01:42:31 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-x_14.0", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97995a5cef1e94c12348f'), ns: "test.foo", min: { x: 14.0 }, max: { x: 16.0 }, shard: "shard0001" }
m30999| Thu Jun 14 01:42:31 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 2|16||000000000000000000000000 min: { x: 14.0 } max: { x: 16.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30000| Thu Jun 14 01:42:31 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 14.0 } -> { x: 16.0 }
m30001| Thu Jun 14 01:42:32 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { x: 14.0 }, max: { x: 16.0 }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 2, clonedBytes: 20086, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:32 [conn4] moveChunk setting version to: 8|0||4fd97995a5cef1e94c12348f
m30000| Thu Jun 14 01:42:32 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 14.0 } -> { x: 16.0 }
m30000| Thu Jun 14 01:42:32 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:32-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652552108), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 14.0 }, max: { x: 16.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1010 } }
m30001| Thu Jun 14 01:42:32 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { x: 14.0 }, max: { x: 16.0 }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 2, clonedBytes: 20086, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:32 [conn4] moveChunk updating self version to: 8|1||4fd97995a5cef1e94c12348f through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:32-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652552112), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 14.0 }, max: { x: 16.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:32 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:32 [conn4] moveChunk deleted: 2
m30001| Thu Jun 14 01:42:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652502:275532884' unlocked.
m30001| Thu Jun 14 01:42:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:32-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48712", time: new Date(1339652552114), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 14.0 }, max: { x: 16.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 12, step6 of 6: 0 } }
m30001| Thu Jun 14 01:42:32 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { x: 14.0 }, max: { x: 16.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_14.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:3076 w:23388 reslen:37 1018ms
m30999| Thu Jun 14 01:42:32 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 8|1||4fd97995a5cef1e94c12348f based on: 7|1||4fd97995a5cef1e94c12348f
m30999| Thu Jun 14 01:42:32 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652500:1804289383' unlocked.
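The balancer round above moved the { x: 14.0 } -> { x: 16.0 } chunk of test.foo from shard0001 to shard0000. For reference, a minimal shell sketch of requesting the same kind of chunk move by hand through the mongos; the namespace, shard key value, and target shard name are taken from the log above, and this is illustrative only, not part of migrateBig.js or of this log:

// Illustrative only; assumes a mongo shell connected to the mongos (port 30999 here).
// Asks the balancer machinery to move the chunk containing { x: 14.0 } to shard0000.
db.getSiblingDB("admin").runCommand( { moveChunk: "test.foo", find: { x: 14.0 }, to: "shard0000" } )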
ShardingTest input: { "shard0000" : 7, "shard0001" : 8 } min: 7 max: 8
chunk diff: 1
m30000| Thu Jun 14 01:42:33 [conn6] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:42:33 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:42:33 [conn3] end connection 127.0.0.1:60125 (9 connections now open)
m30000| Thu Jun 14 01:42:33 [conn5] end connection 127.0.0.1:60131 (8 connections now open)
m30000| Thu Jun 14 01:42:33 [conn6] end connection 127.0.0.1:60134 (7 connections now open)
m30000| Thu Jun 14 01:42:33 [conn10] end connection 127.0.0.1:60142 (7 connections now open)
m30001| Thu Jun 14 01:42:33 [conn4] end connection 127.0.0.1:48712 (5 connections now open)
m30001| Thu Jun 14 01:42:33 [conn3] end connection 127.0.0.1:48710 (4 connections now open)
Thu Jun 14 01:42:34 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:42:34 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:42:34 [interruptThread] now exiting
m30000| Thu Jun 14 01:42:34 dbexit:
m30000| Thu Jun 14 01:42:34 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:42:34 [interruptThread] closing listening socket: 40
m30000| Thu Jun 14 01:42:34 [interruptThread] closing listening socket: 41
m30000| Thu Jun 14 01:42:34 [interruptThread] closing listening socket: 42
m30000| Thu Jun 14 01:42:34 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:42:34 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:42:34 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:42:34 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:42:34 [conn5] end connection 127.0.0.1:48714 (3 connections now open)
m30000| Thu Jun 14 01:42:34 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:42:34 [conn9] end connection 127.0.0.1:60140 (5 connections now open)
m30000| Thu Jun 14 01:42:34 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:42:34 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:42:34 dbexit: really exiting now
Thu Jun 14 01:42:35 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:42:35 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:42:35 [interruptThread] now exiting
m30001| Thu Jun 14 01:42:35 dbexit:
m30001| Thu Jun 14 01:42:35 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:42:35 [interruptThread] closing listening socket: 43
m30001| Thu Jun 14 01:42:35 [interruptThread] closing listening socket: 44
m30001| Thu Jun 14 01:42:35 [interruptThread] closing listening socket: 45
m30001| Thu Jun 14 01:42:35 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:42:35 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:42:35 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:42:35 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:42:35 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:42:35 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:42:35 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:42:35 dbexit: really exiting now
Thu Jun 14 01:42:36 shell: stopped mongo program on port 30001
*** ShardingTest migrateBig completed successfully in 56.642 seconds ***
56730.926037ms
Thu Jun 14 01:42:36 [initandlisten] connection accepted from 127.0.0.1:34799 #45 (32 connections now open)
*******************************************
Test : migrateMemory.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/migrateMemory.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/migrateMemory.js";TestData.testFile = "migrateMemory.js";TestData.testName = "migrateMemory";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:42:36 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/migrateMemory0'
Thu Jun 14 01:42:36 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/migrateMemory0
m30000| Thu Jun 14 01:42:36
m30000| Thu Jun 14 01:42:36 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:42:36
m30000| Thu Jun 14 01:42:36 [initandlisten] MongoDB starting : pid=26824 port=30000 dbpath=/data/db/migrateMemory0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:42:36 [initandlisten]
m30000| Thu Jun 14 01:42:36 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:42:36 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:42:36 [initandlisten]
m30000| Thu Jun 14 01:42:36 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:42:36 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:42:36 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:42:36 [initandlisten]
m30000| Thu Jun 14 01:42:36 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:42:36 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:42:36 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:42:36 [initandlisten] options: { dbpath: "/data/db/migrateMemory0", port: 30000 }
m30000| Thu Jun 14 01:42:36 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:42:36 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/migrateMemory1'
m30000| Thu Jun 14 01:42:36 [initandlisten] connection accepted from 127.0.0.1:60145 #1 (1 connection now open)
Thu Jun 14 01:42:36 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/migrateMemory1
m30001| Thu Jun 14 01:42:36
m30001| Thu Jun 14 01:42:36 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:42:36
m30001| Thu Jun 14 01:42:36 [initandlisten] MongoDB starting : pid=26837 port=30001 dbpath=/data/db/migrateMemory1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:42:36 [initandlisten]
m30001| Thu Jun 14 01:42:36 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:42:36 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:42:36 [initandlisten]
m30001| Thu Jun 14 01:42:36 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:42:36 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:42:36 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:42:36 [initandlisten]
m30001| Thu Jun 14 01:42:36 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:42:36 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:42:36 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:42:36 [initandlisten] options: { dbpath: "/data/db/migrateMemory1", port: 30001 }
m30001| Thu Jun 14 01:42:36 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:42:36 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:42:36 [initandlisten] connection accepted from 127.0.0.1:48722 #1 (1 connection now open)
m30000| Thu Jun 14 01:42:36 [initandlisten] connection accepted from 127.0.0.1:60148 #2 (2 connections now open)
ShardingTest migrateMemory :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
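A minimal sketch of how a two-shard ShardingTest like the one above is typically brought up from the shell and test.foo sharded on { _id: 1 }. The positional ShardingTest arguments and the 1 MB chunk size are assumptions inferred from the log (two shards, one mongos, "MaxChunkSize: 1"); this is not the actual jstests/sharding/migrateMemory.js source:

// Illustrative sketch, not the real migrateMemory.js.
// Two shards, verbosity 0, one mongos, 1 MB max chunk size.
s = new ShardingTest( "migrateMemory" , 2 , 0 , 1 , { chunksize : 1 } )
s.adminCommand( { enablesharding : "test" } )
s.adminCommand( { shardcollection : "test.foo" , key : { _id : 1 } } )
// Inserting enough documents into test.foo would then trigger the autosplit
// activity logged below.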
m30000| Thu Jun 14 01:42:36 [FileAllocator] allocating new datafile /data/db/migrateMemory0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:42:36 [FileAllocator] creating directory /data/db/migrateMemory0/_tmp
Thu Jun 14 01:42:36 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30999| Thu Jun 14 01:42:36 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:42:36 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26852 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:42:36 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:42:36 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:42:36 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:42:36 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:42:36 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:42:36 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:42:36 [initandlisten] connection accepted from 127.0.0.1:60150 #3 (3 connections now open)
m30999| Thu Jun 14 01:42:36 [mongosMain] connected connection!
m30000| Thu Jun 14 01:42:36 [FileAllocator] done allocating datafile /data/db/migrateMemory0/config.ns, size: 16MB, took 0.249 secs
m30000| Thu Jun 14 01:42:36 [FileAllocator] allocating new datafile /data/db/migrateMemory0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:42:37 [FileAllocator] done allocating datafile /data/db/migrateMemory0/config.0, size: 16MB, took 0.27 secs
m30000| Thu Jun 14 01:42:37 [FileAllocator] allocating new datafile /data/db/migrateMemory0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:42:37 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:37 [conn2] insert config.settings keyUpdates:0 locks(micros) w:545105 545ms
m30999| Thu Jun 14 01:42:37 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:37 [mongosMain] connected connection!
m30000| Thu Jun 14 01:42:37 [initandlisten] connection accepted from 127.0.0.1:60153 #4 (4 connections now open)
m30000| Thu Jun 14 01:42:37 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:37 [mongosMain] MaxChunkSize: 1
m30000| Thu Jun 14 01:42:37 [conn3] build index config.chunks { _id: 1 }
m30999| Thu Jun 14 01:42:37 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:42:37 [mongosMain] waiting for connections on port 30999
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:37 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:42:37 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:37 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:37 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:37 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:37 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:42:37 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:37 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:42:37 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:42:37 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:42:37 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:42:37 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:42:37
m30999| Thu Jun 14 01:42:37 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:42:37 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:37 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:37 [Balancer] connected connection!
m30000| Thu Jun 14 01:42:37 [initandlisten] connection accepted from 127.0.0.1:60154 #5 (5 connections now open)
m30999| Thu Jun 14 01:42:37 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:42:37 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652557:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:42:37 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:37 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:42:37 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652557:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652557:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652557:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:42:37 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd979cda3d88055dcf74108" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:42:37 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:42:37 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652557:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:42:37 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652557:1804289383' acquired, ts : 4fd979cda3d88055dcf74108
m30999| Thu Jun 14 01:42:37 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:42:37 [Balancer] no collections to balance
m30999| Thu Jun 14 01:42:37 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:42:37 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:42:37 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652557:1804289383' unlocked.
m30000| Thu Jun 14 01:42:37 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:37 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 1 total records. 0 secs
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:42:37 [mongosMain] connection accepted from 127.0.0.1:54217 #1 (1 connection now open)
m30999| Thu Jun 14 01:42:37 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:42:37 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:37 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:42:37 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:42:37 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:37 [conn] connected connection!
m30001| Thu Jun 14 01:42:37 [initandlisten] connection accepted from 127.0.0.1:48731 #2 (2 connections now open)
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:42:37 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30999| Thu Jun 14 01:42:37 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:37 [conn] connected connection!
m30999| Thu Jun 14 01:42:37 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd979cda3d88055dcf74107
m30999| Thu Jun 14 01:42:37 [conn] initializing shard connection to localhost:30000
m30000| Thu Jun 14 01:42:37 [initandlisten] connection accepted from 127.0.0.1:60157 #6 (6 connections now open)
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:42:37 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:37 [conn] connected connection!
m30999| Thu Jun 14 01:42:37 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd979cda3d88055dcf74107
m30999| Thu Jun 14 01:42:37 [conn] initializing shard connection to localhost:30001
m30001| Thu Jun 14 01:42:37 [initandlisten] connection accepted from 127.0.0.1:48733 #3 (3 connections now open)
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Thu Jun 14 01:42:37 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:42:37 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:42:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:37 [conn] connected connection!
m30999| Thu Jun 14 01:42:37 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:42:37 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:42:37 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:42:37 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:42:37 [conn] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:42:37 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:37 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd979cda3d88055dcf74109 based on: (empty)
m30999| Thu Jun 14 01:42:37 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:42:37 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0000", shardHost: "localhost:30000" } 0x95bce90
m30000| Thu Jun 14 01:42:37 [conn3] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:42:37 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:37 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:42:37 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30001| Thu Jun 14 01:42:37 [initandlisten] connection accepted from 127.0.0.1:48734 #4 (4 connections now open)
m30001| Thu Jun 14 01:42:37 [FileAllocator] allocating new datafile /data/db/migrateMemory1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:42:37 [FileAllocator] creating directory /data/db/migrateMemory1/_tmp
m30000| Thu Jun 14 01:42:38 [FileAllocator] done allocating datafile /data/db/migrateMemory0/config.1, size: 32MB, took 0.859 secs
m30001| Thu Jun 14 01:42:38 [FileAllocator] done allocating datafile /data/db/migrateMemory1/test.ns, size: 16MB, took 0.42 secs
m30001| Thu Jun 14 01:42:38 [FileAllocator] allocating new datafile /data/db/migrateMemory1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:42:38 [FileAllocator] done allocating datafile /data/db/migrateMemory1/test.0, size: 16MB, took 0.297 secs
m30001| Thu Jun 14 01:42:38 [conn4] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:42:38 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:42:38 [conn4] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:42:38 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) W:76 r:271 w:1369642 1369ms
m30001| Thu Jun 14 01:42:38 [conn3] command admin.$cmd command: { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:64 reslen:173 1367ms
m30001| Thu Jun 14 01:42:38 [FileAllocator] allocating new datafile /data/db/migrateMemory1/test.1, filling with zeroes...
m30001| Thu Jun 14 01:42:38 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 68307 splitThreshold: 921
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 10039 splitThreshold: 921
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 10039 splitThreshold: 921
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:42:38 [initandlisten] connection accepted from 127.0.0.1:60160 #7 (7 connections now open)
m30000| Thu Jun 14 01:42:38 [initandlisten] connection accepted from 127.0.0.1:60161 #8 (8 connections now open)
m30001| Thu Jun 14 01:42:38 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652558:2131115093 (sleeping for 30000ms)
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282beb
m30000| Thu Jun 14 01:42:38 [initandlisten] connection accepted from 127.0.0.1:60162 #9 (9 connections now open)
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|0||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558730), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd979cda3d88055dcf74109 based on: 1|0||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 152341 splitThreshold: 471859
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 100390 splitThreshold: 471859
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 100390 splitThreshold: 471859
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 100390 splitThreshold: 471859
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 100390 splitThreshold: 471859
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 100390 splitThreshold: 471859
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 53.0 } ], shardId: "test.foo-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bec
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|2||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558745), what: "split", ns: "test.foo", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 53.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 53.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd979cda3d88055dcf74109 based on: 1|2||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 53.0 } (splitThreshold 471859) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } dataWritten: 195376 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 105.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 105.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 105.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd979cda3d88055dcf74109 based on: 1|4||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 53.0 } max: { _id: MaxKey } on: { _id: 173.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } dataWritten: 195800 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 225.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 225.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 225.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||4fd979cda3d88055dcf74109 based on: 1|6||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 173.0 } max: { _id: MaxKey } on: { _id: 292.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 292.0 } max: { _id: MaxKey } dataWritten: 193538 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 292.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 292.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 344.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 292.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 344.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 292.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 344.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 292.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||4fd979cda3d88055dcf74109 based on: 1|8||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 292.0 } max: { _id: MaxKey } on: { _id: 401.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|10, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } dataWritten: 209864 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 453.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 453.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 453.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||4fd979cda3d88055dcf74109 based on: 1|10||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 401.0 } max: { _id: MaxKey } on: { _id: 516.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|12, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 516.0 } max: { _id: MaxKey } dataWritten: 197926 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 516.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 516.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 568.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 516.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 568.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 516.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 568.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 516.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||4fd979cda3d88055dcf74109 based on: 1|12||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 516.0 } max: { _id: MaxKey } on: { _id: 626.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|14, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 626.0 } max: { _id: MaxKey } dataWritten: 193096 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 626.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 626.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 678.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 626.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 678.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 626.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 678.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 626.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||4fd979cda3d88055dcf74109 based on: 1|14||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 626.0 } max: { _id: MaxKey } on: { _id: 738.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|16, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 738.0 } max: { _id: MaxKey } dataWritten: 194770 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 738.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 738.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 790.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 738.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 790.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 738.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 790.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 738.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||4fd979cda3d88055dcf74109 based on: 1|16||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 738.0 } max: { _id: MaxKey } on: { _id: 850.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|18, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } dataWritten: 210208 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 53.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 53.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 173.0 } ], shardId: "test.foo-_id_53.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bed
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|4||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558763), what: "split", ns: "test.foo", details: { before: { min: { _id: 53.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 53.0 }, max: { _id: 173.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 173.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 173.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 173.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 292.0 } ], shardId: "test.foo-_id_173.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bee
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|6||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558780), what: "split", ns: "test.foo", details: { before: { min: { _id: 173.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 173.0 }, max: { _id: 292.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 292.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 292.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 292.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 401.0 } ], shardId: "test.foo-_id_292.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bef
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|8||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558795), what: "split", ns: "test.foo", details: { before: { min: { _id: 292.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 292.0 }, max: { _id: 401.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 401.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 401.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 401.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 516.0 } ], shardId: "test.foo-_id_401.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bf0
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|10||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558812), what: "split", ns: "test.foo", details: { before: { min: { _id: 401.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 401.0 }, max: { _id: 516.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 516.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 516.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 516.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 516.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 516.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 516.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 516.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 516.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 516.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 626.0 } ], shardId: "test.foo-_id_516.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bf1
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|12||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558827), what: "split", ns: "test.foo", details: { before: { min: { _id: 516.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 516.0 }, max: { _id: 626.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 626.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 626.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 626.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 626.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 626.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 626.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 626.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 626.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 626.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 738.0 } ], shardId: "test.foo-_id_626.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bf2
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|14||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558843), what: "split", ns: "test.foo", details: { before: { min: { _id: 626.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 626.0 }, max: { _id: 738.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 738.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 738.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 738.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 738.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 738.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 738.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 738.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 738.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 738.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 850.0 } ], shardId: "test.foo-_id_738.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bf3
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|16||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558858), what: "split", ns: "test.foo", details: { before: { min: { _id: 738.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 738.0 }, max: { _id: 850.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 850.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 850.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 850.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 850.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 850.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 850.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 902.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 902.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 850.0 } -->> { : MaxKey }
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 902.0 }
{ "_id" : "test.foo-_id_MinKey", "lastmod" : Timestamp(1000, 1), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : { $minKey : 1 } }, "max" : { "_id" : 0 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_0.0", "lastmod" : Timestamp(1000, 3), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 0 }, "max" : { "_id" : 53 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_53.0", "lastmod" : Timestamp(1000, 5), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 53 }, "max" : { "_id" : 173 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_173.0", "lastmod" : Timestamp(1000, 7), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 173 }, "max" : { "_id" : 292 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_292.0", "lastmod" : Timestamp(1000, 9), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 292 }, "max" : { "_id" : 401 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_401.0", "lastmod" : Timestamp(1000, 11), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 401 }, "max" : { "_id" : 516 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_516.0", "lastmod" : Timestamp(1000, 13), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 516 }, "max" : { "_id" : 626 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_626.0", "lastmod" : Timestamp(1000, 15), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 626 }, "max" : { "_id" : 738 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_738.0", "lastmod" : Timestamp(1000, 17), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 738 }, "max" : { "_id" : 850 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_850.0", "lastmod" : Timestamp(1000, 19), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 850 }, "max" : { "_id" : 965 }, "shard" : "shard0001" }
{ "_id" : "test.foo-_id_965.0", "lastmod" : Timestamp(1000, 20), "lastmodEpoch" : ObjectId("4fd979cda3d88055dcf74109"), "ns" : "test.foo", "min" : { "_id" : 965 }, "max" : { "_id" : { $maxKey : 1 } }, "shard" : "shard0001" }
from: shard0001 to: shard0000
{ "shard0000" : 0, "shard0001" : 11 }
0
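The chunk listing and per-shard count printed above come from the config database; a minimal sketch of reproducing them, assuming a shell connected to the mongos on port 30999:

var admin = connect("localhost:30999/admin");
var conf = admin.getSiblingDB("config");
// One document per chunk of test.foo, ordered by range, as in the listing above.
conf.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(printjson);
// Per-shard chunk counts; note that shards owning no chunks simply do not appear here.
var counts = {};
conf.chunks.find({ ns: "test.foo" }).forEach(function (c) {
    counts[c.shard] = (counts[c.shard] || 0) + 1;
});
printjson(counts);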
m30000| Thu Jun 14 01:42:38 [FileAllocator] allocating new datafile /data/db/migrateMemory0/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 850.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 850.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 850.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 965.0 } ], shardId: "test.foo-_id_850.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bf4
m30001| Thu Jun 14 01:42:38 [conn4] splitChunk accepted at version 1|18||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558875), what: "split", ns: "test.foo", details: { before: { min: { _id: 850.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 850.0 }, max: { _id: 965.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') }, right: { min: { _id: 965.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd979cda3d88055dcf74109') } } }
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 965.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 965.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 965.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] request split points lookup for chunk test.foo { : 965.0 } -->> { : MaxKey }
m30001| Thu Jun 14 01:42:38 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:38 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:38 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979cef77fab5bc9282bf5
m30001| Thu Jun 14 01:42:38 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:38-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652558915), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:38 [conn4] moveChunk request accepted at version 1|20||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:38 [conn4] moveChunk number of documents: 53
m30001| Thu Jun 14 01:42:38 [initandlisten] connection accepted from 127.0.0.1:48738 #5 (5 connections now open)
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||4fd979cda3d88055dcf74109 based on: 1|18||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:38 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 850.0 } max: { _id: MaxKey } on: { _id: 965.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|20, versionEpoch: ObjectId('4fd979cda3d88055dcf74109'), serverID: ObjectId('4fd979cda3d88055dcf74107'), shard: "shard0001", shardHost: "localhost:30001" } 0x95be798
m30999| Thu Jun 14 01:42:38 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd979cda3d88055dcf74109'), ok: 1.0 }
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 965.0 } max: { _id: MaxKey } dataWritten: 197265 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 965.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 965.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:42:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 965.0 } max: { _id: MaxKey } dataWritten: 190741 splitThreshold: 943718
m30999| Thu Jun 14 01:42:38 [conn] chunk not full enough to trigger auto-split { _id: 1017.0 }
m30999| Thu Jun 14 01:42:38 [conn] CMD: movechunk: { movechunk: "test.foo", find: { _id: 0.0 }, to: "shard0000" }
m30999| Thu Jun 14 01:42:38 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 53.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:39 [FileAllocator] done allocating datafile /data/db/migrateMemory1/test.1, size: 32MB, took 1.02 secs
m30000| Thu Jun 14 01:42:39 [FileAllocator] done allocating datafile /data/db/migrateMemory0/test.ns, size: 16MB, took 0.803 secs
m30000| Thu Jun 14 01:42:39 [FileAllocator] allocating new datafile /data/db/migrateMemory0/test.0, filling with zeroes...
m30001| Thu Jun 14 01:42:39 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 53.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:42:40 [FileAllocator] done allocating datafile /data/db/migrateMemory0/test.0, size: 16MB, took 0.341 secs
m30000| Thu Jun 14 01:42:40 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:42:40 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:40 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:42:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 53.0 }
m30000| Thu Jun 14 01:42:40 [FileAllocator] allocating new datafile /data/db/migrateMemory0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:42:40 [FileAllocator] done allocating datafile /data/db/migrateMemory0/test.1, size: 32MB, took 0.738 secs
m30001| Thu Jun 14 01:42:40 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 53.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 53, clonedBytes: 532067, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:40 [conn4] moveChunk setting version to: 2|0||4fd979cda3d88055dcf74109
m30000| Thu Jun 14 01:42:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 53.0 }
m30000| Thu Jun 14 01:42:40 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:40-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652560929), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, step1 of 5: 1155, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 848 } }
m30000| Thu Jun 14 01:42:40 [initandlisten] connection accepted from 127.0.0.1:60164 #10 (10 connections now open)
m30001| Thu Jun 14 01:42:40 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 53.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 53, clonedBytes: 532067, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:40 [conn4] moveChunk updating self version to: 2|1||4fd979cda3d88055dcf74109 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:40-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652560933), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:40 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:40 [conn4] moveChunk deleted: 53
m30001| Thu Jun 14 01:42:40 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:40-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652560938), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2005, step5 of 6: 12, step6 of 6: 4 } }
m30001| Thu Jun 14 01:42:40 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:9040 w:1373998 reslen:37 2025ms
m30999| Thu Jun 14 01:42:40 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:42:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 2|1||4fd979cda3d88055dcf74109 based on: 1|20||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:40 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:40 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:42:40 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:40 [conn] warning: mongos collstats doesn't know about: userFlags
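Each migration in this test is driven by the movechunk command shown in the mongos log; a minimal sketch of issuing the same request by hand, assuming a shell connected to the mongos on port 30999, with the find key and target shard taken from the log above:

// mongos resolves the chunk owning { _id: 0 } and runs the shard-side moveChunk
// protocol (distributed lock, clone, commit, inline delete) seen in the log above.
var admin = connect("localhost:30999/admin");
printjson(admin.runCommand({ moveChunk: "test.foo", find: { _id: 0 }, to: "shard0000" }));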
190.54545454545453
m30999| Thu Jun 14 01:42:40 [conn] CMD: movechunk: { movechunk: "test.foo", find: { _id: 190.5454545454545 }, to: "shard0000" }
m30999| Thu Jun 14 01:42:40 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { _id: 173.0 } max: { _id: 292.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:40 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 173.0 }, max: { _id: 292.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_173.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:40 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:40 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979d0f77fab5bc9282bf6
m30001| Thu Jun 14 01:42:40 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:40-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652560943), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 173.0 }, max: { _id: 292.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:40 [conn4] moveChunk request accepted at version 2|1||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:40 [conn4] moveChunk number of documents: 119
m30000| Thu Jun 14 01:42:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 173.0 } -> { _id: 292.0 }
m30001| Thu Jun 14 01:42:41 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 173.0 }, max: { _id: 292.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 119, clonedBytes: 1194641, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:41 [conn4] moveChunk setting version to: 3|0||4fd979cda3d88055dcf74109
m30000| Thu Jun 14 01:42:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 173.0 } -> { _id: 292.0 }
m30000| Thu Jun 14 01:42:41 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:41-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652561953), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 173.0 }, max: { _id: 292.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 18, step4 of 5: 0, step5 of 5: 988 } }
m30001| Thu Jun 14 01:42:41 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 173.0 }, max: { _id: 292.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 119, clonedBytes: 1194641, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:41 [conn4] moveChunk updating self version to: 3|1||4fd979cda3d88055dcf74109 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:41-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652561957), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 173.0 }, max: { _id: 292.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:41 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:41 [conn4] moveChunk deleted: 119
m30001| Thu Jun 14 01:42:41 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:41-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652561967), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 173.0 }, max: { _id: 292.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 12, step6 of 6: 9 } }
m30001| Thu Jun 14 01:42:41 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 173.0 }, max: { _id: 292.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_173.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:9328 w:1383176 reslen:37 1025ms
m30999| Thu Jun 14 01:42:41 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:42:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 3|1||4fd979cda3d88055dcf74109 based on: 2|1||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:41 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:41 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:42:41 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:41 [conn] warning: mongos collstats doesn't know about: userFlags
381.09090909090907
m30999| Thu Jun 14 01:42:41 [conn] CMD: movechunk: { movechunk: "test.foo", find: { _id: 381.0909090909091 }, to: "shard0000" }
m30999| Thu Jun 14 01:42:41 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|9||000000000000000000000000 min: { _id: 292.0 } max: { _id: 401.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:41 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 292.0 }, max: { _id: 401.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_292.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:41 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:41 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979d1f77fab5bc9282bf7
m30001| Thu Jun 14 01:42:41 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:41-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652561972), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 292.0 }, max: { _id: 401.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:41 [conn4] moveChunk request accepted at version 3|1||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:41 [conn4] moveChunk number of documents: 109
m30000| Thu Jun 14 01:42:41 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 292.0 } -> { _id: 401.0 }
m30001| Thu Jun 14 01:42:42 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 292.0 }, max: { _id: 401.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 109, clonedBytes: 1094251, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:42 [conn4] moveChunk setting version to: 4|0||4fd979cda3d88055dcf74109
m30000| Thu Jun 14 01:42:42 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 292.0 } -> { _id: 401.0 }
m30000| Thu Jun 14 01:42:42 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:42-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652562989), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 292.0 }, max: { _id: 401.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 997 } }
m30001| Thu Jun 14 01:42:42 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 292.0 }, max: { _id: 401.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 109, clonedBytes: 1094251, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:42 [conn4] moveChunk updating self version to: 4|1||4fd979cda3d88055dcf74109 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:42 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:42-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652562993), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 292.0 }, max: { _id: 401.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:42 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:43 [conn4] moveChunk deleted: 109
m30001| Thu Jun 14 01:42:43 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:43 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:43-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652563003), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 292.0 }, max: { _id: 401.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1003, step5 of 6: 16, step6 of 6: 9 } }
m30001| Thu Jun 14 01:42:43 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 292.0 }, max: { _id: 401.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_292.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:9611 w:1391570 reslen:37 1031ms
m30999| Thu Jun 14 01:42:43 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:42:43 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 4|1||4fd979cda3d88055dcf74109 based on: 3|1||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:43 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:43 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:42:43 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:43 [conn] warning: mongos collstats doesn't know about: userFlags
571.6363636363636
m30999| Thu Jun 14 01:42:43 [conn] CMD: movechunk: { movechunk: "test.foo", find: { _id: 571.6363636363636 }, to: "shard0000" }
m30999| Thu Jun 14 01:42:43 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|13||000000000000000000000000 min: { _id: 516.0 } max: { _id: 626.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:43 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 516.0 }, max: { _id: 626.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_516.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:43 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:43 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979d3f77fab5bc9282bf8
m30001| Thu Jun 14 01:42:43 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:43-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652563007), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 516.0 }, max: { _id: 626.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:43 [conn4] moveChunk request accepted at version 4|1||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:43 [conn4] moveChunk number of documents: 110
m30000| Thu Jun 14 01:42:43 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 516.0 } -> { _id: 626.0 }
m30001| Thu Jun 14 01:42:44 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 516.0 }, max: { _id: 626.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 110, clonedBytes: 1104290, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:44 [conn4] moveChunk setting version to: 5|0||4fd979cda3d88055dcf74109
m30000| Thu Jun 14 01:42:44 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 516.0 } -> { _id: 626.0 }
m30000| Thu Jun 14 01:42:44 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:44-3", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652564021), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 516.0 }, max: { _id: 626.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 994 } }
m30001| Thu Jun 14 01:42:44 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 516.0 }, max: { _id: 626.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 110, clonedBytes: 1104290, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:44 [conn4] moveChunk updating self version to: 5|1||4fd979cda3d88055dcf74109 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:44-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652564025), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 516.0 }, max: { _id: 626.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:44 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:44 [conn4] moveChunk deleted: 110
m30001| Thu Jun 14 01:42:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:44-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652564035), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 516.0 }, max: { _id: 626.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 16, step6 of 6: 8 } }
m30001| Thu Jun 14 01:42:44 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 516.0 }, max: { _id: 626.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_516.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:9885 w:1399819 reslen:37 1028ms
m30999| Thu Jun 14 01:42:44 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:42:44 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 5|1||4fd979cda3d88055dcf74109 based on: 4|1||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:44 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:44 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:42:44 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:44 [conn] warning: mongos collstats doesn't know about: userFlags
762.1818181818181
m30999| Thu Jun 14 01:42:44 [conn] CMD: movechunk: { movechunk: "test.foo", find: { _id: 762.1818181818181 }, to: "shard0000" }
m30999| Thu Jun 14 01:42:44 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|17||000000000000000000000000 min: { _id: 738.0 } max: { _id: 850.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Thu Jun 14 01:42:44 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 738.0 }, max: { _id: 850.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_738.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:44 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:44 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979d4f77fab5bc9282bf9
m30001| Thu Jun 14 01:42:44 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:44-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652564039), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 738.0 }, max: { _id: 850.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:44 [conn4] moveChunk request accepted at version 5|1||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:44 [conn4] moveChunk number of documents: 112
m30000| Thu Jun 14 01:42:44 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 738.0 } -> { _id: 850.0 }
m30001| Thu Jun 14 01:42:45 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 738.0 }, max: { _id: 850.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 112, clonedBytes: 1124368, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:45 [conn4] moveChunk setting version to: 6|0||4fd979cda3d88055dcf74109
m30000| Thu Jun 14 01:42:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 738.0 } -> { _id: 850.0 }
m30000| Thu Jun 14 01:42:45 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:45-4", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652565045), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 738.0 }, max: { _id: 850.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 18, step4 of 5: 0, step5 of 5: 986 } }
m30001| Thu Jun 14 01:42:45 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 738.0 }, max: { _id: 850.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 112, clonedBytes: 1124368, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:45 [conn4] moveChunk updating self version to: 6|1||4fd979cda3d88055dcf74109 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:45 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:45-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652565049), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 738.0 }, max: { _id: 850.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:45 [conn4] doing delete inline
952.7272727272726
m30001| Thu Jun 14 01:42:45 [conn4] moveChunk deleted: 112
m30001| Thu Jun 14 01:42:45 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:45 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:45-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652565059), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 738.0 }, max: { _id: 850.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 8, step6 of 6: 9 } }
m30001| Thu Jun 14 01:42:45 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 738.0 }, max: { _id: 850.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_738.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:10149 w:1408518 reslen:37 1021ms
m30001| Thu Jun 14 01:42:45 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 850.0 }, max: { _id: 965.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_850.0", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:42:45 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:42:45 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' acquired, ts : 4fd979d5f77fab5bc9282bfa
m30001| Thu Jun 14 01:42:45 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:45-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652565064), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 850.0 }, max: { _id: 965.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:45 [conn4] moveChunk request accepted at version 6|1||4fd979cda3d88055dcf74109
m30001| Thu Jun 14 01:42:45 [conn4] moveChunk number of documents: 115
m30999| Thu Jun 14 01:42:45 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:42:45 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 6|1||4fd979cda3d88055dcf74109 based on: 5|1||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:45 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:45 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:42:45 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:45 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:42:45 [conn] CMD: movechunk: { movechunk: "test.foo", find: { _id: 952.7272727272726 }, to: "shard0000" }
m30999| Thu Jun 14 01:42:45 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|19||000000000000000000000000 min: { _id: 850.0 } max: { _id: 965.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30000| Thu Jun 14 01:42:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 850.0 } -> { _id: 965.0 }
m30001| Thu Jun 14 01:42:46 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 850.0 }, max: { _id: 965.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 115, clonedBytes: 1154485, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:42:46 [conn4] moveChunk setting version to: 7|0||4fd979cda3d88055dcf74109
m30000| Thu Jun 14 01:42:46 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 850.0 } -> { _id: 965.0 }
m30000| Thu Jun 14 01:42:46 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:46-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652566081), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 850.0 }, max: { _id: 965.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 18, step4 of 5: 0, step5 of 5: 996 } }
m30001| Thu Jun 14 01:42:46 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 850.0 }, max: { _id: 965.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 115, clonedBytes: 1154485, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:42:46 [conn4] moveChunk updating self version to: 7|1||4fd979cda3d88055dcf74109 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m30001| Thu Jun 14 01:42:46 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:46-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652566086), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 850.0 }, max: { _id: 965.0 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:42:46 [conn4] doing delete inline
m30001| Thu Jun 14 01:42:46 [conn4] moveChunk deleted: 115
m30001| Thu Jun 14 01:42:46 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652558:2131115093' unlocked.
m30001| Thu Jun 14 01:42:46 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:42:46-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48734", time: new Date(1339652566098), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 850.0 }, max: { _id: 965.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1004, step5 of 6: 16, step6 of 6: 11 } }
m30001| Thu Jun 14 01:42:46 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 850.0 }, max: { _id: 965.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_850.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:10445 w:1419161 reslen:37 1035ms
m30999| Thu Jun 14 01:42:46 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:42:46 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 7|1||4fd979cda3d88055dcf74109 based on: 6|1||4fd979cda3d88055dcf74109
m30999| Thu Jun 14 01:42:46 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:46 [conn] warning: mongos collstats doesn't know about: userFlags
m30999| Thu Jun 14 01:42:46 [conn] warning: mongos collstats doesn't know about: systemFlags
m30999| Thu Jun 14 01:42:46 [conn] warning: mongos collstats doesn't know about: userFlags
{
"bits" : 32,
"resident" : 43,
"virtual" : 163,
"supported" : true,
"mapped" : 32
}
{
"bits" : 32,
"resident" : 43,
"virtual" : 164,
"supported" : true,
"mapped" : 32
}
{
"bits" : 32,
"resident" : 43,
"virtual" : 164,
"supported" : true,
"mapped" : 32
}
{
"bits" : 32,
"resident" : 43,
"virtual" : 164,
"supported" : true,
"mapped" : 32
}
{
"bits" : 32,
"resident" : 43,
"virtual" : 164,
"supported" : true,
"mapped" : 32
}
{
"bits" : 32,
"resident" : 43,
"virtual" : 164,
"supported" : true,
"mapped" : 32
}
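The memory documents printed above have the shape of serverStatus().mem on a 32-bit server of this vintage; a minimal sketch of collecting one such sample, assuming a shell connection to one of the processes started by this test (port 30001 is used here purely as an example, since the log does not say which process the test samples):

// Illustrative only: prints { bits, resident, virtual, supported, mapped } (sizes in MB),
// the same fields as the documents above.
var server = connect("localhost:30001/admin");
printjson(server.serverStatus().mem);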
m30999| Thu Jun 14 01:42:46 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:42:46 [conn3] end connection 127.0.0.1:60150 (9 connections now open)
m30000| Thu Jun 14 01:42:46 [conn5] end connection 127.0.0.1:60154 (8 connections now open)
m30000| Thu Jun 14 01:42:46 [conn6] end connection 127.0.0.1:60157 (7 connections now open)
m30001| Thu Jun 14 01:42:46 [conn3] end connection 127.0.0.1:48733 (4 connections now open)
m30001| Thu Jun 14 01:42:46 [conn4] end connection 127.0.0.1:48734 (3 connections now open)
Thu Jun 14 01:42:47 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:42:47 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:42:47 [interruptThread] now exiting
m30000| Thu Jun 14 01:42:47 dbexit:
m30000| Thu Jun 14 01:42:47 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:42:47 [interruptThread] closing listening socket: 41
m30000| Thu Jun 14 01:42:47 [interruptThread] closing listening socket: 42
m30000| Thu Jun 14 01:42:47 [interruptThread] closing listening socket: 43
m30000| Thu Jun 14 01:42:47 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:42:47 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:42:47 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:42:47 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:42:47 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:42:47 [conn10] end connection 127.0.0.1:60164 (6 connections now open)
m30001| Thu Jun 14 01:42:47 [conn5] end connection 127.0.0.1:48738 (2 connections now open)
m30000| Thu Jun 14 01:42:47 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:42:47 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:42:47 dbexit: really exiting now
Thu Jun 14 01:42:48 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:42:48 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:42:48 [interruptThread] now exiting
m30001| Thu Jun 14 01:42:48 dbexit:
m30001| Thu Jun 14 01:42:48 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:42:48 [interruptThread] closing listening socket: 44
m30001| Thu Jun 14 01:42:48 [interruptThread] closing listening socket: 45
m30001| Thu Jun 14 01:42:48 [interruptThread] closing listening socket: 46
m30001| Thu Jun 14 01:42:48 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:42:48 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:42:48 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:42:48 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:42:48 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:42:48 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:42:48 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:42:48 dbexit: really exiting now
Thu Jun 14 01:42:49 shell: stopped mongo program on port 30001
*** ShardingTest migrateMemory completed successfully in 12.9 seconds ***
12960.816145ms
Thu Jun 14 01:42:49 [initandlisten] connection accepted from 127.0.0.1:34821 #46 (33 connections now open)
*******************************************
Test : mongos_no_detect_sharding.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mongos_no_detect_sharding.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/mongos_no_detect_sharding.js";TestData.testFile = "mongos_no_detect_sharding.js";TestData.testName = "mongos_no_detect_sharding";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:42:49 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:42:49 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:42:49
m30000| Thu Jun 14 01:42:49 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:42:49
m30000| Thu Jun 14 01:42:49 [initandlisten] MongoDB starting : pid=26902 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:42:49 [initandlisten]
m30000| Thu Jun 14 01:42:49 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:42:49 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:42:49 [initandlisten]
m30000| Thu Jun 14 01:42:49 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:42:49 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:42:49 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:42:49 [initandlisten]
m30000| Thu Jun 14 01:42:49 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:42:49 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:42:49 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:42:49 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:42:49 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:42:49 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:42:49 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m30000| Thu Jun 14 01:42:49 [initandlisten] connection accepted from 127.0.0.1:60167 #1 (1 connection now open)
m29000| Thu Jun 14 01:42:49
m29000| Thu Jun 14 01:42:49 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:42:49
m29000| Thu Jun 14 01:42:49 [initandlisten] MongoDB starting : pid=26914 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:42:49 [initandlisten]
m29000| Thu Jun 14 01:42:49 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:42:49 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:42:49 [initandlisten]
m29000| Thu Jun 14 01:42:49 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:42:49 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:42:49 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:42:49 [initandlisten]
m29000| Thu Jun 14 01:42:49 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:42:49 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:42:49 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:42:49 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:42:49 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:42:49 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:42:49 [websvr] ERROR: addr already in use
"localhost:29000"
m29000| Thu Jun 14 01:42:49 [initandlisten] connection accepted from 127.0.0.1:44290 #1 (1 connection now open)
m29000| Thu Jun 14 01:42:49 [initandlisten] connection accepted from 127.0.0.1:44291 #2 (2 connections now open)
ShardingTest test :
{
    "config" : "localhost:29000",
    "shards" : [
        connection to localhost:30000
    ]
}
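[editor's note] The block above is the cluster description a ShardingTest prints at startup: one shard on port 30000, a single config server on 29000, and (further down) two mongos on 30999 and 30998. A rough sketch of how a jstest of this era sets up that topology; the exact constructor options and the s0/s1 handles are assumptions, not copied from mongos_no_detect_sharding.js:

    // Minimal sketch, assuming the ShardingTest helper's object form.
    var st = new ShardingTest({ name: "test", shards: 1, mongos: 2, verbose: 2 });
    var mongosA = st.s0;   // first mongos  (30999 in this log)
    var mongosB = st.s1;   // second mongos (30998 in this log)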
m29000| Thu Jun 14 01:42:49 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:42:49 [FileAllocator] creating directory /data/db/test-config0/_tmp
Thu Jun 14 01:42:49 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:29000 -vv
m30999| Thu Jun 14 01:42:49 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:42:49 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26930 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:42:49 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:42:49 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:42:49 [mongosMain] options: { configdb: "localhost:29000", port: 30999, vv: true }
m30999| Thu Jun 14 01:42:49 [mongosMain] config string : localhost:29000
m30999| Thu Jun 14 01:42:49 [mongosMain] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:42:49 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:49 [mongosMain] connected connection!
m29000| Thu Jun 14 01:42:49 [initandlisten] connection accepted from 127.0.0.1:44293 #3 (3 connections now open)
m29000| Thu Jun 14 01:42:49 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.248 secs
m29000| Thu Jun 14 01:42:49 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:42:50 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.295 secs
m29000| Thu Jun 14 01:42:50 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:42:50 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:42:50 [conn2] insert config.settings keyUpdates:0 locks(micros) w:560445 560ms
m30999| Thu Jun 14 01:42:50 [mongosMain] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:44296 #4 (4 connections now open)
m30999| Thu Jun 14 01:42:50 [mongosMain] connected connection!
m29000| Thu Jun 14 01:42:50 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:50 [mongosMain] MaxChunkSize: 50
m29000| Thu Jun 14 01:42:50 [conn3] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:42:50 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:42:50 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:42:50 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:42:50 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:42:50 [conn3] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:42:50 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:42:50 [conn3] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:50 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:42:50 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:42:50 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:42:50 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:42:50 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:42:50
m30999| Thu Jun 14 01:42:50 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:42:50 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:50 [Balancer] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:44297 #5 (5 connections now open)
m30999| Thu Jun 14 01:42:50 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:42:50 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:42:50 [Balancer] connected connection!
m30999| Thu Jun 14 01:42:50 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:42:50 [Balancer] skew from remote server localhost:29000 found: 0
m30999| Thu Jun 14 01:42:50 [Balancer] skew from remote server localhost:29000 found: -1
m30999| Thu Jun 14 01:42:50 [Balancer] skew from remote server localhost:29000 found: 0
m30999| Thu Jun 14 01:42:50 [Balancer] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
m30999| Thu Jun 14 01:42:50 [Balancer] inserting initial doc in config.locks for lock balancer
m29000| Thu Jun 14 01:42:50 [conn5] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:50 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652570:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652570:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652570:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:42:50 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd979da2b49c9cd14cf1391" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:42:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652570:1804289383' acquired, ts : 4fd979da2b49c9cd14cf1391
m30999| Thu Jun 14 01:42:50 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:42:50 [Balancer] no collections to balance
m30999| Thu Jun 14 01:42:50 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:42:50 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:42:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652570:1804289383' unlocked.
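[editor's note] The two JSON fragments in the lock-acquisition lines above are the proposed and current documents in the config server's config.locks collection. The same document can be inspected directly from the shell; a small sketch, with the connection assumed to point at the config server or a mongos:

    // Inspect the balancer's distributed lock document.
    // state: 0 = unlocked, 1 = being acquired, 2 = held.
    var configDB = db.getSiblingDB("config");
    printjson(configDB.locks.findOne({ _id: "balancer" }));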
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:42:50 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30999:1339652570:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:42:50 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:42:50 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:42:50 [LockPinger] cluster localhost:29000 pinged successfully at Thu Jun 14 01:42:50 2012 by distributed lock pinger 'localhost:29000/domU-12-31-39-01-70-B4:30999:1339652570:1804289383', sleeping for 30000ms
Thu Jun 14 01:42:50 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:29000 -vv
m30999| Thu Jun 14 01:42:50 [mongosMain] connection accepted from 127.0.0.1:54239 #1 (1 connection now open)
m30998| Thu Jun 14 01:42:50 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:42:50 [mongosMain] MongoS version 2.1.2-pre- starting: pid=26948 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:42:50 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:42:50 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:42:50 [mongosMain] options: { configdb: "localhost:29000", port: 30998, vv: true }
m30998| Thu Jun 14 01:42:50 [mongosMain] config string : localhost:29000
m30998| Thu Jun 14 01:42:50 [mongosMain] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:44300 #6 (6 connections now open)
m30998| Thu Jun 14 01:42:50 [mongosMain] connected connection!
m30998| Thu Jun 14 01:42:50 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:42:50 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:42:50 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:42:50 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:42:50 [Balancer] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:44301 #7 (7 connections now open)
m30998| Thu Jun 14 01:42:50 [Balancer] connected connection!
m30998| Thu Jun 14 01:42:50 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:42:50 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:42:50
m30998| Thu Jun 14 01:42:50 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:42:50 [Balancer] creating new connection to:localhost:29000
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:44302 #8 (8 connections now open)
m30998| Thu Jun 14 01:42:50 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:42:50 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:42:50 [Balancer] connected connection!
m30998| Thu Jun 14 01:42:50 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:42:50 [Balancer] skew from remote server localhost:29000 found: -1
m30998| Thu Jun 14 01:42:50 [Balancer] skew from remote server localhost:29000 found: 0
m30998| Thu Jun 14 01:42:50 [Balancer] skew from remote server localhost:29000 found: 0
m30998| Thu Jun 14 01:42:50 [Balancer] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
m30998| Thu Jun 14 01:42:50 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652570:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339652570:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339652570:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:42:50 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd979dafc260dd23719ccb2" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd979da2b49c9cd14cf1391" } }
m30998| Thu Jun 14 01:42:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652570:1804289383' acquired, ts : 4fd979dafc260dd23719ccb2
m30998| Thu Jun 14 01:42:50 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:42:50 [Balancer] no collections to balance
m30998| Thu Jun 14 01:42:50 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:42:50 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:42:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652570:1804289383' unlocked.
m30998| Thu Jun 14 01:42:50 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30998:1339652570:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:42:50 [LockPinger] cluster localhost:29000 pinged successfully at Thu Jun 14 01:42:50 2012 by distributed lock pinger 'localhost:29000/domU-12-31-39-01-70-B4:30998:1339652570:1804289383', sleeping for 30000ms
m30998| Thu Jun 14 01:42:50 [mongosMain] connection accepted from 127.0.0.1:42061 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:42:50 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:42:50 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:42:50 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:50 [conn] put [admin] on: config:localhost:29000
m30999| Thu Jun 14 01:42:50 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:50 [conn] connected connection!
m30000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:60183 #2 (2 connections now open)
m30999| Thu Jun 14 01:42:50 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
m30999| Thu Jun 14 01:42:50 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
Creating unsharded connection...
m30000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:60184 #3 (3 connections now open)
Sharding collection...
m30998| Thu Jun 14 01:42:50 [conn] couldn't find database [test] in config db
m30998| Thu Jun 14 01:42:50 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:42:50 [conn] connected connection!
m30999| Thu Jun 14 01:42:50 [conn] connected connection!
m30999| Thu Jun 14 01:42:50 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd979da2b49c9cd14cf1390
m30999| Thu Jun 14 01:42:50 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:42:50 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd979da2b49c9cd14cf1390'), authoritative: true }
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:42:50 [conn] creating new connection to:localhost:29000
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:60185 #4 (4 connections now open)
m29000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:44307 #9 (9 connections now open)
m30999| Thu Jun 14 01:42:50 [conn] connected connection!
m30999| Thu Jun 14 01:42:50 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd979da2b49c9cd14cf1390
m30999| Thu Jun 14 01:42:50 [conn] initializing shard connection to localhost:29000
m30999| Thu Jun 14 01:42:50 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd979da2b49c9cd14cf1390'), authoritative: true }
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: WriteBackListener-localhost:29000
m30999| Thu Jun 14 01:42:50 [WriteBackListener-localhost:29000] localhost:29000 is not a shard node
m30998| Thu Jun 14 01:42:50 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 0 writeLock: 0
m30998| Thu Jun 14 01:42:50 [conn] put [test] on: shard0000:localhost:30000
m30998| Thu Jun 14 01:42:50 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:42:50 [conn] connected connection!
m30998| Thu Jun 14 01:42:50 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd979dafc260dd23719ccb1
m30000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:60187 #5 (5 connections now open)
m30998| Thu Jun 14 01:42:50 [conn] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:42:50 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd979dafc260dd23719ccb1'), authoritative: true }
m30998| Thu Jun 14 01:42:50 BackgroundJob starting: WriteBackListener-localhost:30000
m30998| Thu Jun 14 01:42:50 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30000| Thu Jun 14 01:42:50 [FileAllocator] allocating new datafile /data/db/test0/test.ns, filling with zeroes...
m30000| Thu Jun 14 01:42:50 [FileAllocator] creating directory /data/db/test0/_tmp
m30998| Thu Jun 14 01:42:50 [conn] found 0 dropped collections and 0 sharded collections for database admin
m30999| Thu Jun 14 01:42:50 [conn] DBConfig unserialize: test { _id: "test", partitioned: false, primary: "shard0000" }
m30999| Thu Jun 14 01:42:50 [conn] found 0 dropped collections and 0 sharded collections for database test
m30999| Thu Jun 14 01:42:50 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:42:50 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:42:50 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:42:50 [conn] connected connection!
m30000| Thu Jun 14 01:42:50 [initandlisten] connection accepted from 127.0.0.1:60188 #6 (6 connections now open)
m29000| Thu Jun 14 01:42:50 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.601 secs
m30000| Thu Jun 14 01:42:51 [FileAllocator] done allocating datafile /data/db/test0/test.ns, size: 16MB, took 0.287 secs
m30000| Thu Jun 14 01:42:51 [FileAllocator] allocating new datafile /data/db/test0/test.0, filling with zeroes...
m30000| Thu Jun 14 01:42:51 [FileAllocator] done allocating datafile /data/db/test0/test.0, size: 16MB, took 0.288 secs
m30000| Thu Jun 14 01:42:51 [FileAllocator] allocating new datafile /data/db/test0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:42:51 [conn5] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:42:51 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:42:51 [conn5] insert test.foo keyUpdates:0 locks(micros) w:919688 919ms
m30999| Thu Jun 14 01:42:51 [conn] CMD: shardcollection: { shardCollection: "test.foo", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:42:51 [conn] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:42:51 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd979db2b49c9cd14cf1392
m30999| Thu Jun 14 01:42:51 [conn] loaded 1 chunks into new chunk manager for test.foo with version 1|0||4fd979db2b49c9cd14cf1392
m30999| Thu Jun 14 01:42:51 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd979db2b49c9cd14cf1392 based on: (empty)
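[editor's note] The "CMD: shardcollection" lines above correspond to the enableSharding/shardCollection pair run through the first mongos (30999), which creates the single initial chunk loaded into the chunk manager. A minimal sketch of those two admin commands; the connection handle is an assumption:

    // Enable sharding on the database, then shard the collection on _id.
    var admin = st.s0.getDB("admin");
    admin.runCommand({ enableSharding: "test" });
    admin.runCommand({ shardCollection: "test.foo", key: { _id: 1 } });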
m29000| Thu Jun 14 01:42:51 [conn3] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:42:51 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:42:51 [conn] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd979db2b49c9cd14cf1392 manager: 0x8d60dd8
m30999| Thu Jun 14 01:42:51 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), serverID: ObjectId('4fd979da2b49c9cd14cf1390'), shard: "shard0000", shardHost: "localhost:30000" } 0x8d5e580
m30999| Thu Jun 14 01:42:51 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:42:51 [conn] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd979db2b49c9cd14cf1392 manager: 0x8d60dd8
m30999| Thu Jun 14 01:42:51 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), serverID: ObjectId('4fd979da2b49c9cd14cf1390'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8d5e580
m30999| Thu Jun 14 01:42:51 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:42:51 [conn3] no current chunk manager found for this shard, will initialize
m29000| Thu Jun 14 01:42:51 [initandlisten] connection accepted from 127.0.0.1:44310 #10 (10 connections now open)
Seeing if data gets inserted unsharded...
No splits occur here!
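[editor's note] Everything from here to the end of the excerpt is the second mongos (30998) servicing its WriteBackListener: its routing connection still sends shard version 0|0 for test.foo, so the shard returns each insert as a writeBack document and the listener replays it at version 1|0. A hedged sketch of the insert phase that produces this pattern; the loop bound and the st.s1 handle are assumptions:

    // Insert through the mongos that has not yet learned test.foo is sharded.
    // Each write reaches the shard with a stale shard version, is queued as a
    // writeback, and is replayed by that mongos's WriteBackListener below.
    var coll = st.s1.getDB("test").foo;    // second mongos, port 30998 here
    for (var i = 1; i <= 100; i++) {
        coll.insert({ i: i });
    }
    st.s1.getDB("test").getLastError();    // wait for the writes to settle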
m30000| Thu Jun 14 01:42:51 [initandlisten] connection accepted from 127.0.0.1:60190 #7 (7 connections now open)
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000000'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000000 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 0|0||000000000000000000000000
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12b6'), i: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] warning: reloading config data for test, wanted version 1|0||4fd979db2b49c9cd14cf1392 but currently have version 0|0||000000000000000000000000
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] DBConfig unserialize: test { _id: "test", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] loaded 1 chunks into new chunk manager for test.foo with version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd979db2b49c9cd14cf1392 based on: (empty)
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] found 0 dropped collections and 1 sharded collections for database test
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:42:51 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connected connection!
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd979dafc260dd23719ccb1'), authoritative: true }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd979db2b49c9cd14cf1392 manager: 0x99aba70
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), serverID: ObjectId('4fd979dafc260dd23719ccb1'), shard: "shard0000", shardHost: "localhost:30000" } 0x99ab898
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000001'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000001 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12b7'), i: 2.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000002'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000002 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12b8'), i: 3.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000003'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000003 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12b9'), i: 4.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000004'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000004 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ba'), i: 5.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000005'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000005 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12bb'), i: 6.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000006'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000006 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12bc'), i: 7.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000007'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000007 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12bd'), i: 8.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000008'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000008 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12be'), i: 9.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000009'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000009 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12bf'), i: 10.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000000a'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000000a needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c0'), i: 11.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000000b'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000000b needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c1'), i: 12.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000000c'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000000c needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c2'), i: 13.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000000d'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000000d needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c3'), i: 14.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000000e'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000000e needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c4'), i: 15.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000000f'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000000f needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c5'), i: 16.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000010'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000010 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c6'), i: 17.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000011'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000011 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c7'), i: 18.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000012'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000012 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c8'), i: 19.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000013'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000013 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12c9'), i: 20.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000014'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000014 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ca'), i: 21.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000015'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000015 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12cb'), i: 22.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000016'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000016 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12cc'), i: 23.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000017'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000017 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12cd'), i: 24.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000018'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000018 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ce'), i: 25.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000019'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000019 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12cf'), i: 26.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000001a'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000001a needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d0'), i: 27.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000001b'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000001b needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d1'), i: 28.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000001c'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000001c needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d2'), i: 29.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000001d'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000001d needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d3'), i: 30.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000001e'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000001e needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d4'), i: 31.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000001f'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000001f needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d5'), i: 32.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000020'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000020 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d6'), i: 33.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000021'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000021 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d7'), i: 34.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000022'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000022 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d8'), i: 35.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000023'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000023 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12d9'), i: 36.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000024'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000024 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12da'), i: 37.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000025'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000025 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12db'), i: 38.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000026'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000026 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12dc'), i: 39.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000027'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000027 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12dd'), i: 40.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000028'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000028 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12de'), i: 41.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000029'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000029 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12df'), i: 42.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000002a'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000002a needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e0'), i: 43.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000002b'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000002b needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e1'), i: 44.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000002c'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000002c needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e2'), i: 45.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000002d'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000002d needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e3'), i: 46.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000002e'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000002e needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e4'), i: 47.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000002f'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000002f needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e5'), i: 48.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000030'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000030 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e6'), i: 49.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000031'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000031 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e7'), i: 50.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000032'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000032 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e8'), i: 51.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000033'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000033 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12e9'), i: 52.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000034'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000034 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ea'), i: 53.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000035'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000035 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12eb'), i: 54.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000036'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000036 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ec'), i: 55.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000037'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000037 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ed'), i: 56.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000038'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000038 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ee'), i: 57.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000039'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000039 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ef'), i: 58.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000003a'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000003a needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f0'), i: 59.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000003b'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000003b needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f1'), i: 60.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000003c'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000003c needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f2'), i: 61.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000003d'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000003d needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f3'), i: 62.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000003e'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000003e needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f4'), i: 63.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000003f'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000003f needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f5'), i: 64.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000040'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000040 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f6'), i: 65.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000041'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000041 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f7'), i: 66.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000042'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000042 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f8'), i: 67.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000043'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000043 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12f9'), i: 68.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000044'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000044 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12fa'), i: 69.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000045'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000045 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12fb'), i: 70.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000046'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000046 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12fc'), i: 71.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000047'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000047 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12fd'), i: 72.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000048'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000048 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12fe'), i: 73.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000049'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000049 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad12ff'), i: 74.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000004a'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000004a needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1300'), i: 75.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000004b'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000004b needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1301'), i: 76.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000004c'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000004c needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1302'), i: 77.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000004d'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000004d needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1303'), i: 78.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000004e'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000004e needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1304'), i: 79.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000004f'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000004f needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1305'), i: 80.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000050'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000050 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1306'), i: 81.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000051'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000051 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1307'), i: 82.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000052'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000052 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1308'), i: 83.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000053'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000053 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1309'), i: 84.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000054'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000054 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad130a'), i: 85.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000055'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000055 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad130b'), i: 86.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000056'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000056 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad130c'), i: 87.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000057'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000057 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad130d'), i: 88.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000058'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000058 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad130e'), i: 89.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000059'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000059 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad130f'), i: 90.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000005a'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000005a needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1310'), i: 91.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000005b'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000005b needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1311'), i: 92.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000005c'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000005c needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1312'), i: 93.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000005d'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000005d needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1313'), i: 94.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000005e'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000005e needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1314'), i: 95.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db000000000000005f'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db000000000000005f needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1315'), i: 96.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000060'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000060 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1316'), i: 97.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000061'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000061 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1317'), i: 98.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000062'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000062 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1318'), i: 99.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd979db0000000000000063'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), yourVersion: Timestamp 0|0, yourVersionEpoch: ObjectId('000000000000000000000000'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd979db0000000000000063 needVersion : 1|0||4fd979db2b49c9cd14cf1392 mine : 1|0||4fd979db2b49c9cd14cf1392
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] op: insert len: 62 ns: test.foo{ _id: ObjectId('4fd979db36d9d6e743ad1319'), i: 100.0 }
m30998| Thu Jun 14 01:42:51 [WriteBackListener-localhost:30000] wbl already reloaded config information for version 1|0||4fd979db2b49c9cd14cf1392, at version 1|0||4fd979db2b49c9cd14cf1392
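The writeback entries above are the second mongos (m30998) replaying inserts that the shard queued because the sending connection had no shard version yet (yourVersion 0|0); each replayed document lands in test.foo with an incrementing i field. A minimal sketch of the kind of insert loop that produces this traffic, assuming a shell connected to one of this test's mongos routers; the collection name and counter field are taken from the log, the exact loop bounds are an assumption:

    // hypothetical reconstruction of the insert workload behind the writebacks above
    var coll = db.getSiblingDB("test").foo;   // ns "test.foo", as logged
    for (var i = 0; i < 100; i++) {           // i values up to 100.0 appear in the log
        coll.insert({ i: i });                // an ObjectId _id is auto-generated, as in the logged documents
    }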
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
                test.foo chunks:
                        shard0000  1
                { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(1000, 0)
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^test\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "test.foo", lastmod: new Date(1339652571), dropped: false, key: { _id: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd979db2b49c9cd14cf1392') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "test.foo" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:29000]
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:42:51 [conn] [pcursor] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), ns: "test.foo", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
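The pcursor traces above show mongos reading config.version, config.shards, config.databases, config.collections and config.chunks to assemble that summary. The same metadata can be queried directly; a sketch, assuming the config server from this run at localhost:29000:

    // read the chunk layout for test.foo straight from the config database
    var config = new Mongo("localhost:29000").getDB("config");
    config.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(printjson);
    // each document carries min, max, shard and lastmod, matching the cursor result logged above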
m30999| Thu Jun 14 01:42:51 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30998| Thu Jun 14 01:42:51 [conn] ns:test.foo at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||4fd979db2b49c9cd14cf1392]
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30998| Thu Jun 14 01:42:51 [conn] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 1|0||4fd979db2b49c9cd14cf1392 manager: 0x99aba70
m30998| Thu Jun 14 01:42:51 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd979db2b49c9cd14cf1392'), serverID: ObjectId('4fd979dafc260dd23719ccb1'), shard: "shard0000", shardHost: "localhost:30000" } 0x99aa3f8
m30998| Thu Jun 14 01:42:51 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] needed to set remote version on connection to value compatible with [test.foo @ 1|0||4fd979db2b49c9cd14cf1392]
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.foo @ 1|0||4fd979db2b49c9cd14cf1392", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] finishing over 1 shards
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "test.foo @ 1|0||4fd979db2b49c9cd14cf1392", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:42:51 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||4fd979db2b49c9cd14cf1392", cursor: { _id: ObjectId('4fd979da36d9d6e743ad12b5'), i: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m29000| Thu Jun 14 01:42:51 [conn4] end connection 127.0.0.1:44296 (9 connections now open)
m30000| Thu Jun 14 01:42:51 [conn3] end connection 127.0.0.1:60184 (6 connections now open)
m30000| Thu Jun 14 01:42:51 [conn6] end connection 127.0.0.1:60188 (5 connections now open)
m29000| Thu Jun 14 01:42:51 [conn9] end connection 127.0.0.1:44307 (8 connections now open)
m29000| Thu Jun 14 01:42:51 [conn3] end connection 127.0.0.1:44293 (7 connections now open)
m29000| Thu Jun 14 01:42:51 [conn5] end connection 127.0.0.1:44297 (6 connections now open)
m30000| Thu Jun 14 01:42:51 [FileAllocator] done allocating datafile /data/db/test0/test.1, size: 32MB, took 0.526 secs
Thu Jun 14 01:42:52 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:42:52 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:42:52 [conn6] end connection 127.0.0.1:44300 (5 connections now open)
m29000| Thu Jun 14 01:42:52 [conn7] end connection 127.0.0.1:44301 (4 connections now open)
m29000| Thu Jun 14 01:42:52 [conn8] end connection 127.0.0.1:44302 (3 connections now open)
m30000| Thu Jun 14 01:42:52 [conn5] end connection 127.0.0.1:60187 (4 connections now open)
m30000| Thu Jun 14 01:42:52 [conn7] end connection 127.0.0.1:60190 (3 connections now open)
Thu Jun 14 01:42:53 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:42:53 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:42:53 [interruptThread] now exiting
m30000| Thu Jun 14 01:42:53 dbexit:
m30000| Thu Jun 14 01:42:53 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:42:53 [interruptThread] closing listening socket: 42
m30000| Thu Jun 14 01:42:53 [interruptThread] closing listening socket: 43
m30000| Thu Jun 14 01:42:53 [interruptThread] closing listening socket: 44
m30000| Thu Jun 14 01:42:53 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:42:53 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:42:53 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:42:53 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:42:53 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:42:53 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:42:53 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:42:53 dbexit: really exiting now
m29000| Thu Jun 14 01:42:53 [conn10] end connection 127.0.0.1:44310 (2 connections now open)
Thu Jun 14 01:42:54 shell: stopped mongo program on port 30000
m29000| Thu Jun 14 01:42:54 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:42:54 [interruptThread] now exiting
m29000| Thu Jun 14 01:42:54 dbexit:
m29000| Thu Jun 14 01:42:54 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:42:54 [interruptThread] closing listening socket: 45
m29000| Thu Jun 14 01:42:54 [interruptThread] closing listening socket: 46
m29000| Thu Jun 14 01:42:54 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:42:54 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:42:54 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:42:54 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:42:54 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:42:54 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:42:54 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:42:54 dbexit: really exiting now
Thu Jun 14 01:42:55 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 6.309 seconds ***
6384.038925ms
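The test that just finished used the usual jstest harness: a ShardingTest brings up the config server (29000), the shard (30000) and two mongos routers (30999, 30998), then tears everything down, which is the shutdown cascade above. A minimal sketch of that harness; the option values and the sharding commands are assumptions about what this particular test passed, though the shard key { _id: 1 } matches the chunk metadata in the log:

    // rough shape of a sharding jstest's setup and teardown (a sketch, option values assumed)
    var st = new ShardingTest({ shards: 1, mongos: 2 });
    var admin = st.s0.getDB("admin");
    admin.runCommand({ enableSharding: "test" });
    admin.runCommand({ shardCollection: "test.foo", key: { _id: 1 } });
    // ... exercise both mongos routers ...
    st.stop();   // stops mongos, shards and config servers, as in the log above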
Thu Jun 14 01:42:55 [initandlisten] connection accepted from 127.0.0.1:34847 #47 (34 connections now open)
*******************************************
Test : mongos_no_replica_set_refresh.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mongos_no_replica_set_refresh.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/mongos_no_replica_set_refresh.js";TestData.testFile = "mongos_no_replica_set_refresh.js";TestData.testName = "mongos_no_replica_set_refresh";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:42:55 2012
MongoDB shell version: 2.1.2-pre-
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31100,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 0,
"node" : 0,
"set" : "test-rs0"
},
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-0'
Thu Jun 14 01:42:55 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Thu Jun 14 01:42:55
m31100| Thu Jun 14 01:42:55 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Thu Jun 14 01:42:55
m31100| Thu Jun 14 01:42:55 [initandlisten] MongoDB starting : pid=26989 port=31100 dbpath=/data/db/test-rs0-0 32-bit host=domU-12-31-39-01-70-B4
m31100| Thu Jun 14 01:42:55 [initandlisten]
m31100| Thu Jun 14 01:42:55 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Thu Jun 14 01:42:55 [initandlisten] ** Not recommended for production.
m31100| Thu Jun 14 01:42:55 [initandlisten]
m31100| Thu Jun 14 01:42:55 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Thu Jun 14 01:42:55 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Thu Jun 14 01:42:55 [initandlisten] ** with --journal, the limit is lower
m31100| Thu Jun 14 01:42:55 [initandlisten]
m31100| Thu Jun 14 01:42:55 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Thu Jun 14 01:42:55 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Thu Jun 14 01:42:55 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31100| Thu Jun 14 01:42:55 [initandlisten] options: { dbpath: "/data/db/test-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "test-rs0", rest: true, smallfiles: true }
m31100| Thu Jun 14 01:42:55 [initandlisten] waiting for connections on port 31100
m31100| Thu Jun 14 01:42:55 [websvr] admin web console waiting for connections on port 32100
m31100| Thu Jun 14 01:42:55 [initandlisten] connection accepted from 10.255.119.66:48282 #1 (1 connection now open)
m31100| Thu Jun 14 01:42:55 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Thu Jun 14 01:42:55 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to domU-12-31-39-01-70-B4:31100 ]
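mongos_no_replica_set_refresh.js drives these startups through the shell's ReplSetTest helper: each "ReplSetTest n: ..." block above is the per-node option document it resolves before spawning mongod. A minimal sketch of that helper, with the constructor reduced to the values visible in the log; ports and the extra flags such as --smallfiles and --noprealloc are filled in by the harness:

    // shell-side harness that produces the per-node startups logged here (a sketch)
    var rst = new ReplSetTest({ name: "test-rs0", nodes: 3, oplogSize: 40 });
    rst.startSet();          // starts mongod on the allocated ports (31100-31102 here)
    rst.initiate();          // issues replSetInitiate with the generated member list
    rst.awaitReplication();  // waits for the secondaries' oplogs, matching the "waiting for ... oplog" lines below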
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31101,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 0,
"node" : 1,
"set" : "test-rs0"
},
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-1'
m31100| Thu Jun 14 01:42:55 [initandlisten] connection accepted from 127.0.0.1:60034 #2 (2 connections now open)
Thu Jun 14 01:42:55 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Thu Jun 14 01:42:55
m31101| Thu Jun 14 01:42:55 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Thu Jun 14 01:42:55
m31101| Thu Jun 14 01:42:55 [initandlisten] MongoDB starting : pid=27005 port=31101 dbpath=/data/db/test-rs0-1 32-bit host=domU-12-31-39-01-70-B4
m31101| Thu Jun 14 01:42:55 [initandlisten]
m31101| Thu Jun 14 01:42:55 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Thu Jun 14 01:42:55 [initandlisten] ** Not recommended for production.
m31101| Thu Jun 14 01:42:55 [initandlisten]
m31101| Thu Jun 14 01:42:55 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Thu Jun 14 01:42:55 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Thu Jun 14 01:42:55 [initandlisten] ** with --journal, the limit is lower
m31101| Thu Jun 14 01:42:55 [initandlisten]
m31101| Thu Jun 14 01:42:55 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Thu Jun 14 01:42:55 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Thu Jun 14 01:42:55 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31101| Thu Jun 14 01:42:55 [initandlisten] options: { dbpath: "/data/db/test-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "test-rs0", rest: true, smallfiles: true }
m31101| Thu Jun 14 01:42:55 [websvr] admin web console waiting for connections on port 32101
m31101| Thu Jun 14 01:42:55 [initandlisten] waiting for connections on port 31101
m31101| Thu Jun 14 01:42:55 [initandlisten] connection accepted from 10.255.119.66:34626 #1 (1 connection now open)
m31101| Thu Jun 14 01:42:55 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Thu Jun 14 01:42:55 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Thu Jun 14 01:42:56 [initandlisten] connection accepted from 127.0.0.1:48210 #2 (2 connections now open)
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101
]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31102,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "test-rs0",
"dbpath" : "$set-$node",
"useHostname" : true,
"noJournalPrealloc" : undefined,
"pathOpts" : {
"testName" : "test",
"shard" : 0,
"node" : 2,
"set" : "test-rs0"
},
"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-2'
Thu Jun 14 01:42:56 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --oplogSize 40 --port 31102 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Thu Jun 14 01:42:56
m31102| Thu Jun 14 01:42:56 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Thu Jun 14 01:42:56
m31102| Thu Jun 14 01:42:56 [initandlisten] MongoDB starting : pid=27021 port=31102 dbpath=/data/db/test-rs0-2 32-bit host=domU-12-31-39-01-70-B4
m31102| Thu Jun 14 01:42:56 [initandlisten]
m31102| Thu Jun 14 01:42:56 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Thu Jun 14 01:42:56 [initandlisten] ** Not recommended for production.
m31102| Thu Jun 14 01:42:56 [initandlisten]
m31102| Thu Jun 14 01:42:56 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Thu Jun 14 01:42:56 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Thu Jun 14 01:42:56 [initandlisten] ** with --journal, the limit is lower
m31102| Thu Jun 14 01:42:56 [initandlisten]
m31102| Thu Jun 14 01:42:56 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Thu Jun 14 01:42:56 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Thu Jun 14 01:42:56 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m31102| Thu Jun 14 01:42:56 [initandlisten] options: { dbpath: "/data/db/test-rs0-2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "test-rs0", rest: true, smallfiles: true }
m31102| Thu Jun 14 01:42:56 [websvr] admin web console waiting for connections on port 32102
m31102| Thu Jun 14 01:42:56 [initandlisten] waiting for connections on port 31102
m31102| Thu Jun 14 01:42:56 [initandlisten] connection accepted from 10.255.119.66:54550 #1 (1 connection now open)
m31102| Thu Jun 14 01:42:56 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Thu Jun 14 01:42:56 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
connection to domU-12-31-39-01-70-B4:31100,
connection to domU-12-31-39-01-70-B4:31101,
connection to domU-12-31-39-01-70-B4:31102
]
{
"replSetInitiate" : {
"_id" : "test-rs0",
"members" : [
{
"_id" : 0,
"host" : "domU-12-31-39-01-70-B4:31100"
},
{
"_id" : 1,
"host" : "domU-12-31-39-01-70-B4:31101"
},
{
"_id" : 2,
"host" : "domU-12-31-39-01-70-B4:31102"
}
]
}
}
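The document above is the replSetInitiate configuration the harness sends to the first node. Doing the same thing by hand from a shell connected to domU-12-31-39-01-70-B4:31100 would look roughly like this:

    // manual equivalent of the initiation the harness performs (a sketch)
    var cfg = {
        _id: "test-rs0",
        members: [
            { _id: 0, host: "domU-12-31-39-01-70-B4:31100" },
            { _id: 1, host: "domU-12-31-39-01-70-B4:31101" },
            { _id: 2, host: "domU-12-31-39-01-70-B4:31102" }
        ]
    };
    rs.initiate(cfg);   // wraps db.adminCommand({ replSetInitiate: cfg })
    rs.status();        // members move through STARTUP2 / RECOVERING to SECONDARY and PRIMARY, as logged below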
m31102| Thu Jun 14 01:42:56 [initandlisten] connection accepted from 127.0.0.1:51158 #2 (2 connections now open)
m31100| Thu Jun 14 01:42:56 [conn2] replSet replSetInitiate admin command received from client
m31100| Thu Jun 14 01:42:56 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Thu Jun 14 01:42:56 [initandlisten] connection accepted from 10.255.119.66:34631 #3 (3 connections now open)
m31102| Thu Jun 14 01:42:56 [initandlisten] connection accepted from 10.255.119.66:54553 #3 (3 connections now open)
m31100| Thu Jun 14 01:42:56 [conn2] replSet replSetInitiate all members seem up
m31100| Thu Jun 14 01:42:56 [conn2] ******
m31100| Thu Jun 14 01:42:56 [conn2] creating replication oplog of size: 40MB...
m31100| Thu Jun 14 01:42:56 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.ns, filling with zeroes...
m31100| Thu Jun 14 01:42:56 [FileAllocator] creating directory /data/db/test-rs0-0/_tmp
m31100| Thu Jun 14 01:42:56 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.ns, size: 16MB, took 0.229 secs
m31100| Thu Jun 14 01:42:56 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.0, filling with zeroes...
m31100| Thu Jun 14 01:42:57 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.0, size: 64MB, took 1.265 secs
m31100| Thu Jun 14 01:42:57 [conn2] ******
m31100| Thu Jun 14 01:42:57 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Thu Jun 14 01:42:57 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:42:57 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Thu Jun 14 01:42:57 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "test-rs0", members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" }, { _id: 2.0, host: "domU-12-31-39-01-70-B4:31102" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1536761 w:35 reslen:112 1537ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
m31100| Thu Jun 14 01:43:05 [rsStart] replSet I am domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:05 [rsStart] replSet STARTUP2
m31100| Thu Jun 14 01:43:05 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31100| Thu Jun 14 01:43:05 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:43:05 [rsSync] replSet SECONDARY
m31101| Thu Jun 14 01:43:05 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:05 [initandlisten] connection accepted from 10.255.119.66:48292 #3 (3 connections now open)
m31101| Thu Jun 14 01:43:05 [rsStart] replSet I am domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:43:05 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Thu Jun 14 01:43:05 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:43:05 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.ns, filling with zeroes...
m31101| Thu Jun 14 01:43:05 [FileAllocator] creating directory /data/db/test-rs0-1/_tmp
m31102| Thu Jun 14 01:43:06 [rsStart] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:06 [initandlisten] connection accepted from 10.255.119.66:48293 #4 (4 connections now open)
m31102| Thu Jun 14 01:43:06 [rsStart] replSet I am domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:43:06 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Thu Jun 14 01:43:06 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:43:06 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.ns, filling with zeroes...
m31102| Thu Jun 14 01:43:06 [FileAllocator] creating directory /data/db/test-rs0-2/_tmp
m31101| Thu Jun 14 01:43:06 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.ns, size: 16MB, took 0.276 secs
m31101| Thu Jun 14 01:43:06 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.0, filling with zeroes...
m31102| Thu Jun 14 01:43:06 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.ns, size: 16MB, took 0.581 secs
m31101| Thu Jun 14 01:43:06 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.0, size: 16MB, took 0.573 secs
m31102| Thu Jun 14 01:43:06 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.0, filling with zeroes...
m31101| Thu Jun 14 01:43:07 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:43:07 [rsStart] replSet STARTUP2
m31101| Thu Jun 14 01:43:07 [rsSync] ******
m31102| Thu Jun 14 01:43:07 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.0, size: 16MB, took 0.24 secs
m31101| Thu Jun 14 01:43:07 [rsSync] creating replication oplog of size: 40MB...
m31101| Thu Jun 14 01:43:07 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.1, filling with zeroes...
m31100| Thu Jun 14 01:43:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31100| Thu Jun 14 01:43:07 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31101 would veto
m31101| Thu Jun 14 01:43:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:43:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31102| Thu Jun 14 01:43:07 [initandlisten] connection accepted from 10.255.119.66:54556 #4 (4 connections now open)
m31101| Thu Jun 14 01:43:07 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31102| Thu Jun 14 01:43:08 [rsStart] replSet saveConfigLocally done
m31101| Thu Jun 14 01:43:08 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.1, size: 64MB, took 1.088 secs
m31102| Thu Jun 14 01:43:08 [rsStart] replSet STARTUP2
m31102| Thu Jun 14 01:43:08 [rsSync] ******
m31102| Thu Jun 14 01:43:08 [rsSync] creating replication oplog of size: 40MB...
m31102| Thu Jun 14 01:43:08 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.1, filling with zeroes...
m31102| Thu Jun 14 01:43:09 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.1, size: 64MB, took 1.174 secs
m31101| Thu Jun 14 01:43:09 [rsSync] ******
m31101| Thu Jun 14 01:43:09 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:43:09 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Thu Jun 14 01:43:09 [rsSync] ******
m31102| Thu Jun 14 01:43:09 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:43:09 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Thu Jun 14 01:43:09 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31100| Thu Jun 14 01:43:09 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31101| Thu Jun 14 01:43:09 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state STARTUP2
m31102| Thu Jun 14 01:43:10 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31102| Thu Jun 14 01:43:10 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state SECONDARY
m31101| Thu Jun 14 01:43:10 [initandlisten] connection accepted from 10.255.119.66:34636 #4 (4 connections now open)
m31102| Thu Jun 14 01:43:10 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31102| Thu Jun 14 01:43:10 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state STARTUP2
m31101| Thu Jun 14 01:43:15 [conn3] replSet RECOVERING
m31101| Thu Jun 14 01:43:15 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31102| Thu Jun 14 01:43:15 [conn3] replSet RECOVERING
m31102| Thu Jun 14 01:43:15 [conn3] replSet info voting yea for domU-12-31-39-01-70-B4:31100 (0)
m31100| Thu Jun 14 01:43:15 [rsMgr] replSet info electSelf 0
m31100| Thu Jun 14 01:43:15 [rsMgr] replSet PRIMARY
m31100| Thu Jun 14 01:43:15 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.ns, filling with zeroes...
m31101| Thu Jun 14 01:43:15 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31101| Thu Jun 14 01:43:15 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
m31100| Thu Jun 14 01:43:16 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.ns, size: 16MB, took 0.265 secs
m31100| Thu Jun 14 01:43:16 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.0, filling with zeroes...
m31102| Thu Jun 14 01:43:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31102| Thu Jun 14 01:43:16 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31100| Thu Jun 14 01:43:16 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.0, size: 16MB, took 0.308 secs
m31100| Thu Jun 14 01:43:16 [conn2] build index admin.foo { _id: 1 }
m31100| Thu Jun 14 01:43:16 [conn2] build index done. scanned 0 total records. 0.05 secs
m31100| Thu Jun 14 01:43:16 [conn2] insert admin.foo keyUpdates:0 locks(micros) W:1536761 w:634315 634ms
ReplSetTest Timestamp(1339652596000, 1)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
Thu Jun 14 01:43:17 [clientcursormon] mem (MB) res:16 virt:142 mapped:0
m31100| Thu Jun 14 01:43:17 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state RECOVERING
m31100| Thu Jun 14 01:43:17 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state RECOVERING
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31101| Thu Jun 14 01:43:19 [conn3] end connection 10.255.119.66:34631 (3 connections now open)
m31101| Thu Jun 14 01:43:19 [initandlisten] connection accepted from 10.255.119.66:34637 #5 (4 connections now open)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31100| Thu Jun 14 01:43:21 [conn3] end connection 10.255.119.66:48292 (3 connections now open)
m31100| Thu Jun 14 01:43:21 [initandlisten] connection accepted from 10.255.119.66:48297 #5 (4 connections now open)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31100| Thu Jun 14 01:43:24 [initandlisten] connection accepted from 10.255.119.66:48298 #6 (5 connections now open)
m31100| Thu Jun 14 01:43:24 [conn4] end connection 10.255.119.66:48293 (4 connections now open)
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31101 to have an oplog built.
ReplSetTest waiting for connection to domU-12-31-39-01-70-B4:31102 to have an oplog built.
m31101| Thu Jun 14 01:43:25 [rsSync] replSet initial sync pending
m31101| Thu Jun 14 01:43:25 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:25 [initandlisten] connection accepted from 10.255.119.66:48299 #7 (5 connections now open)
m31101| Thu Jun 14 01:43:25 [rsSync] build index local.me { _id: 1 }
m31101| Thu Jun 14 01:43:25 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:43:25 [rsSync] replSet initial sync drop all databases
m31101| Thu Jun 14 01:43:25 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Thu Jun 14 01:43:25 [rsSync] replSet initial sync clone all databases
m31101| Thu Jun 14 01:43:25 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:43:25 [initandlisten] connection accepted from 10.255.119.66:48300 #8 (6 connections now open)
m31102| Thu Jun 14 01:43:25 [rsSync] replSet initial sync pending
m31102| Thu Jun 14 01:43:25 [rsSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:43:25 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.ns, filling with zeroes...
m31100| Thu Jun 14 01:43:25 [initandlisten] connection accepted from 10.255.119.66:48301 #9 (7 connections now open)
m31102| Thu Jun 14 01:43:25 [rsSync] build index local.me { _id: 1 }
m31102| Thu Jun 14 01:43:25 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:43:25 [rsSync] replSet initial sync drop all databases
m31102| Thu Jun 14 01:43:25 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Thu Jun 14 01:43:25 [rsSync] replSet initial sync clone all databases
m31102| Thu Jun 14 01:43:25 [rsSync] replSet initial sync cloning db: admin
m31100| Thu Jun 14 01:43:25 [initandlisten] connection accepted from 10.255.119.66:48302 #10 (8 connections now open)
m31102| Thu Jun 14 01:43:25 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.ns, filling with zeroes...
m31101| Thu Jun 14 01:43:25 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.ns, size: 16MB, took 0.367 secs
m31101| Thu Jun 14 01:43:25 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.0, filling with zeroes...
m31102| Thu Jun 14 01:43:25 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.ns, size: 16MB, took 0.505 secs
m31102| Thu Jun 14 01:43:25 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.0, filling with zeroes...
m31101| Thu Jun 14 01:43:26 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.0, size: 16MB, took 0.492 secs
m31101| Thu Jun 14 01:43:26 [rsSync] build index admin.foo { _id: 1 }
m31101| Thu Jun 14 01:43:26 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Thu Jun 14 01:43:26 [rsSync] build index done. scanned 1 total records. 0 secs
m31101| Thu Jun 14 01:43:26 [rsSync] replSet initial sync data copy, starting syncup
m31101| Thu Jun 14 01:43:26 [rsSync] replSet initial sync building indexes
m31101| Thu Jun 14 01:43:26 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Thu Jun 14 01:43:26 [conn8] end connection 10.255.119.66:48300 (7 connections now open)
m31100| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:48303 #11 (8 connections now open)
m31100| Thu Jun 14 01:43:26 [conn11] end connection 10.255.119.66:48303 (7 connections now open)
m31101| Thu Jun 14 01:43:26 [rsSync] replSet initial sync query minValid
m31101| Thu Jun 14 01:43:26 [rsSync] replSet initial sync finishing up
m31101| Thu Jun 14 01:43:26 [rsSync] replSet set minValid=4fd979f4:1
m31101| Thu Jun 14 01:43:26 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Thu Jun 14 01:43:26 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Thu Jun 14 01:43:26 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:43:26 [conn7] end connection 10.255.119.66:48299 (6 connections now open)
m31102| Thu Jun 14 01:43:26 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.0, size: 16MB, took 0.651 secs
m31102| Thu Jun 14 01:43:26 [rsSync] build index admin.foo { _id: 1 }
m31102| Thu Jun 14 01:43:26 [rsSync] fastBuildIndex dupsToDrop:0
m31102| Thu Jun 14 01:43:26 [rsSync] build index done. scanned 1 total records. 0 secs
m31102| Thu Jun 14 01:43:26 [rsSync] replSet initial sync data copy, starting syncup
m31100| Thu Jun 14 01:43:26 [conn10] end connection 10.255.119.66:48302 (5 connections now open)
m31102| Thu Jun 14 01:43:26 [rsSync] replSet initial sync building indexes
m31102| Thu Jun 14 01:43:26 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:48304 #12 (6 connections now open)
m31102| Thu Jun 14 01:43:26 [rsSync] replSet initial sync query minValid
m31100| Thu Jun 14 01:43:26 [conn12] end connection 10.255.119.66:48304 (5 connections now open)
m31102| Thu Jun 14 01:43:26 [rsSync] replSet initial sync finishing up
m31102| Thu Jun 14 01:43:26 [rsSync] replSet set minValid=4fd979f4:1
m31102| Thu Jun 14 01:43:26 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Thu Jun 14 01:43:26 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Thu Jun 14 01:43:26 [rsSync] replSet initial sync done
m31100| Thu Jun 14 01:43:26 [conn9] end connection 10.255.119.66:48301 (4 connections now open)
{
    "ts" : Timestamp(1339652596000, 1),
    "h" : NumberLong("5580269124417749009"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd979f3eacf3a8e277dcec1"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31101 is 1339652596000:1 and latest is 1339652596000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31101 is 1
{
    "ts" : Timestamp(1339652596000, 1),
    "h" : NumberLong("5580269124417749009"),
    "op" : "i",
    "ns" : "admin.foo",
    "o" : {
        "_id" : ObjectId("4fd979f3eacf3a8e277dcec1"),
        "x" : 1
    }
}
ReplSetTest await TS for connection to domU-12-31-39-01-70-B4:31102 is 1339652596000:1 and latest is 1339652596000:1
ReplSetTest await oplog size for connection to domU-12-31-39-01-70-B4:31102 is 1
ReplSetTest await synced=true
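The "ReplSetTest await" lines above compare each secondary's newest oplog entry against the primary's latest optime before declaring the set synced. A minimal, illustrative check of the same condition from the mongo shell (a sketch, not the harness code itself; host names reused from this run):

    // Sketch only, assuming direct shell connections to the members from this run.
    var primaryLocal = connect("domU-12-31-39-01-70-B4:31100/local");
    var secondaryLocal = connect("domU-12-31-39-01-70-B4:31101/local");
    function lastOpTime(localDb) {
        // newest entry in the oplog; $natural: -1 walks the capped collection backwards
        return localDb.oplog.rs.find().sort({ $natural: -1 }).limit(1).next().ts;
    }
    print("latest: " + tojson(lastOpTime(primaryLocal)));
    print("seen:   " + tojson(lastOpTime(secondaryLocal)));
    // the set is considered synced once the two timestamps match, as in the lines above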
Thu Jun 14 01:43:26 starting new replica set monitor for replica set test-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
Thu Jun 14 01:43:26 successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set test-rs0
m31100| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:48305 #13 (5 connections now open)
Thu Jun 14 01:43:26 changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from test-rs0/
Thu Jun 14 01:43:26 trying to add new host domU-12-31-39-01-70-B4:31100 to replica set test-rs0
Thu Jun 14 01:43:26 successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set test-rs0
Thu Jun 14 01:43:26 trying to add new host domU-12-31-39-01-70-B4:31101 to replica set test-rs0
m31100| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:48306 #14 (6 connections now open)
m31101| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:34648 #6 (5 connections now open)
Thu Jun 14 01:43:26 successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set test-rs0
Thu Jun 14 01:43:26 trying to add new host domU-12-31-39-01-70-B4:31102 to replica set test-rs0
m31102| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:54570 #5 (5 connections now open)
Thu Jun 14 01:43:26 successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set test-rs0
m31100| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:48309 #15 (7 connections now open)
m31100| Thu Jun 14 01:43:26 [conn13] end connection 10.255.119.66:48305 (6 connections now open)
Thu Jun 14 01:43:26 Primary for replica set test-rs0 changed to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:34651 #7 (6 connections now open)
m31102| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:54573 #6 (6 connections now open)
Thu Jun 14 01:43:26 replica set monitor for replica set test-rs0 started, address is test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
Thu Jun 14 01:43:26 [ReplicaSetMonitorWatcher] starting
m31100| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:48312 #16 (7 connections now open)
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:43:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m29000| Thu Jun 14 01:43:26
m29000| Thu Jun 14 01:43:26 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:43:26
m29000| Thu Jun 14 01:43:26 [initandlisten] MongoDB starting : pid=27113 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:43:26 [initandlisten]
m29000| Thu Jun 14 01:43:26 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:43:26 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:43:26 [initandlisten]
m29000| Thu Jun 14 01:43:26 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:43:26 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:43:26 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:43:26 [initandlisten]
m29000| Thu Jun 14 01:43:26 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:43:26 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:43:26 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:43:26 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:43:26 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:43:26 [websvr] admin web console waiting for connections on port 30000
"domU-12-31-39-01-70-B4:29000"
m29000| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 127.0.0.1:44346 #1 (1 connection now open)
ShardingTest test :
{
    "config" : "domU-12-31-39-01-70-B4:29000",
    "shards" : [
        connection to test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
    ]
}
Thu Jun 14 01:43:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb domU-12-31-39-01-70-B4:29000 -vv
m29000| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:34655 #2 (2 connections now open)
m29000| Thu Jun 14 01:43:26 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:43:26 [FileAllocator] creating directory /data/db/test-config0/_tmp
m30999| Thu Jun 14 01:43:26 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:43:26 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27128 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:43:26 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:43:26 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:43:26 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", port: 30999, vv: true }
m30999| Thu Jun 14 01:43:26 [mongosMain] config string : domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:26 [mongosMain] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:26 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:26 [mongosMain] connected connection!
m29000| Thu Jun 14 01:43:26 [initandlisten] connection accepted from 10.255.119.66:34657 #3 (3 connections now open)
m29000| Thu Jun 14 01:43:26 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.25 secs
m29000| Thu Jun 14 01:43:26 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m31101| Thu Jun 14 01:43:27 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48319 #17 (8 connections now open)
m31102| Thu Jun 14 01:43:27 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48321 #18 (9 connections now open)
m29000| Thu Jun 14 01:43:27 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.253 secs
m29000| Thu Jun 14 01:43:27 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:27 [conn2] insert config.settings keyUpdates:0 locks(micros) w:524107 524ms
m30999| Thu Jun 14 01:43:27 [mongosMain] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [mongosMain] connected connection!
m29000| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34662 #4 (4 connections now open)
m29000| Thu Jun 14 01:43:27 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:43:27 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:27 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:43:27 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:43:27 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:43:27 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:43:27 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:43:27 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: PeriodicTask::Runner
m29000| Thu Jun 14 01:43:27 [conn3] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:27 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:43:27 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:27 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:27 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:27 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34663 #5 (5 connections now open)
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:27 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:43:27 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:43:27
m30999| Thu Jun 14 01:43:27 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:43:27 [Balancer] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [Balancer] connected connection!
m29000| Thu Jun 14 01:43:27 [conn3] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:27 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:43:27 [conn3] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:27 [conn5] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:27 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:43:27 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: -1
m30999| Thu Jun 14 01:43:27 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: 0
m30999| Thu Jun 14 01:43:27 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: 0
m30999| Thu Jun 14 01:43:27 [Balancer] total clock skew of 0ms for servers domU-12-31-39-01-70-B4:29000 is in 30000ms bounds.
m30999| Thu Jun 14 01:43:27 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:43:27 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652607:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652607:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652607:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:43:27 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd979ff60d677edb2382470" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:43:27 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652607:1804289383' acquired, ts : 4fd979ff60d677edb2382470
m30999| Thu Jun 14 01:43:27 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:43:27 [Balancer] no collections to balance
m30999| Thu Jun 14 01:43:27 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:43:27 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:43:27 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652607:1804289383' unlocked.
m30999| Thu Jun 14 01:43:27 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30999:1339652607:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:43:27 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:27 [LockPinger] cluster domU-12-31-39-01-70-B4:29000 pinged successfully at Thu Jun 14 01:43:27 2012 by distributed lock pinger 'domU-12-31-39-01-70-B4:29000/domU-12-31-39-01-70-B4:30999:1339652607:1804289383', sleeping for 30000ms
m29000| Thu Jun 14 01:43:27 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 1 total records. 0 secs
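The balancer rounds above acquire and release a distributed lock stored on the config server, and the LockPinger keeps a heartbeat document alongside it; the config.locks and config.lockpings collections are created on first use, as the index builds show. A hedged sketch of inspecting that state directly (collection names come from the log, the query shapes are assumptions):

    // Illustrative queries against the config server started above (port 29000).
    var configDB = connect("domU-12-31-39-01-70-B4:29000/config");
    printjson(configDB.locks.findOne({ _id: "balancer" }));                      // state: 0 means currently unlocked
    printjson(configDB.lockpings.find().sort({ ping: -1 }).limit(1).next());     // most recent lock-pinger heartbeat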
Thu Jun 14 01:43:27 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb domU-12-31-39-01-70-B4:29000 -vv
m30998| Thu Jun 14 01:43:27 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:43:27 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27150 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:43:27 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:43:27 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:43:27 [mongosMain] options: { configdb: "domU-12-31-39-01-70-B4:29000", port: 30998, vv: true }
m30998| Thu Jun 14 01:43:27 [mongosMain] config string : domU-12-31-39-01-70-B4:29000
m30998| Thu Jun 14 01:43:27 [mongosMain] creating new connection to:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:27 [mongosMain] connection accepted from 127.0.0.1:54297 #1 (1 connection now open)
m30998| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:43:27 [mongosMain] connected connection!
m30998| Thu Jun 14 01:43:27 [mongosMain] MaxChunkSize: 50
m29000| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34666 #6 (6 connections now open)
m30998| Thu Jun 14 01:43:27 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:43:27 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:43:27 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:43:27 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:43:27 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:43:27 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:43:27 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:43:27 [Balancer] creating new connection to:domU-12-31-39-01-70-B4:29000
m30998| Thu Jun 14 01:43:27 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:43:27 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34667 #7 (7 connections now open)
m30998| Thu Jun 14 01:43:27 [Balancer] connected connection!
m30998| Thu Jun 14 01:43:27 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:43:27 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:43:27
m30998| Thu Jun 14 01:43:27 [Balancer] created new distributed lock for balancer on domU-12-31-39-01-70-B4:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:43:27 [Balancer] creating new connection to:domU-12-31-39-01-70-B4:29000
m30998| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34668 #8 (8 connections now open)
m30998| Thu Jun 14 01:43:27 [Balancer] connected connection!
m30998| Thu Jun 14 01:43:27 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:43:27 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: 0
m30998| Thu Jun 14 01:43:27 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: 0
m30998| Thu Jun 14 01:43:27 [Balancer] skew from remote server domU-12-31-39-01-70-B4:29000 found: 0
m30998| Thu Jun 14 01:43:27 [Balancer] total clock skew of 0ms for servers domU-12-31-39-01-70-B4:29000 is in 30000ms bounds.
m30998| Thu Jun 14 01:43:27 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652607:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339652607:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339652607:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:43:27 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd979ff299ed76f9a53e29c" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd979ff60d677edb2382470" } }
m30998| Thu Jun 14 01:43:27 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652607:1804289383' acquired, ts : 4fd979ff299ed76f9a53e29c
m30998| Thu Jun 14 01:43:27 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:43:27 [Balancer] no collections to balance
m30998| Thu Jun 14 01:43:27 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:43:27 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:43:27 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652607:1804289383' unlocked.
m30998| Thu Jun 14 01:43:27 [LockPinger] creating distributed lock ping thread for domU-12-31-39-01-70-B4:29000 and process domU-12-31-39-01-70-B4:30998:1339652607:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:43:27 [LockPinger] cluster domU-12-31-39-01-70-B4:29000 pinged successfully at Thu Jun 14 01:43:27 2012 by distributed lock pinger 'domU-12-31-39-01-70-B4:29000/domU-12-31-39-01-70-B4:30998:1339652607:1804289383', sleeping for 30000ms
m31101| Thu Jun 14 01:43:27 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48329 #19 (10 connections now open)
m31101| Thu Jun 14 01:43:27 [rsSync] replSet SECONDARY
m31102| Thu Jun 14 01:43:27 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48330 #20 (11 connections now open)
m31102| Thu Jun 14 01:43:27 [rsSync] replSet SECONDARY
ShardingTest undefined going to add shard : test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30998| Thu Jun 14 01:43:27 [mongosMain] connection accepted from 127.0.0.1:42121 #1 (1 connection now open)
m30999| Thu Jun 14 01:43:27 [conn] couldn't find database [admin] in config db
m29000| Thu Jun 14 01:43:27 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:43:27 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:27 [conn] put [admin] on: config:domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:27 [conn] starting new replica set monitor for replica set test-rs0 with seed of domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] successfully connected to seed domU-12-31-39-01-70-B4:31100 for replica set test-rs0
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48332 #21 (12 connections now open)
m30999| Thu Jun 14 01:43:27 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652607532), ok: 1.0 }
m30999| Thu Jun 14 01:43:27 [conn] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from test-rs0/
m30999| Thu Jun 14 01:43:27 [conn] trying to add new host domU-12-31-39-01-70-B4:31100 to replica set test-rs0
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48333 #22 (13 connections now open)
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31100 in replica set test-rs0
m30999| Thu Jun 14 01:43:27 [conn] trying to add new host domU-12-31-39-01-70-B4:31101 to replica set test-rs0
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31101 in replica set test-rs0
m30999| Thu Jun 14 01:43:27 [conn] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set test-rs0
m31101| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34675 #8 (7 connections now open)
m31102| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:54597 #7 (7 connections now open)
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set test-rs0
m30999| Thu Jun 14 01:43:27 [conn] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] connected connection!
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48336 #23 (14 connections now open)
m31100| Thu Jun 14 01:43:27 [conn21] end connection 10.255.119.66:48332 (13 connections now open)
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[1].ok = false domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] replicaSetChange: shard not found for set: test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] _check : test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652607535), ok: 1.0 }
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[1].ok = false domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] Primary for replica set test-rs0 changed to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652607536), ok: 1.0 }
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[1].ok = false domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31100" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339652607539), ok: 1.0 }
m30999| Thu Jun 14 01:43:27 [conn] creating new connection to:domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] connected connection!
m31101| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34678 #9 (8 connections now open)
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31102", maxBsonObjectSize: 16777216, localTime: new Date(1339652607540), ok: 1.0 }
m30999| Thu Jun 14 01:43:27 [conn] creating new connection to:domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] connected connection!
m31102| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:54600 #8 (8 connections now open)
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[1].ok = false domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:27 [conn] dbclient_rs nodes[2].ok = true domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] replica set monitor for replica set test-rs0 started, address is test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ReplicaSetMonitorWatcher
m30999| Thu Jun 14 01:43:27 [ReplicaSetMonitorWatcher] starting
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48339 #24 (14 connections now open)
m30999| Thu Jun 14 01:43:27 [conn] going to add shard: { _id: "test-rs0", host: "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" }
{ "shardAdded" : "test-rs0", "ok" : 1 }
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48340 #25 (15 connections now open)
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31100 serverID: 4fd979ff60d677edb238246f
m30999| Thu Jun 14 01:43:27 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31101 serverID: 4fd979ff60d677edb238246f
m30999| Thu Jun 14 01:43:27 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:31102 serverID: 4fd979ff60d677edb238246f
m30999| Thu Jun 14 01:43:27 [conn] initializing shard connection to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd979ff60d677edb238246f'), authoritative: true }
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] creating new connection to:domU-12-31-39-01-70-B4:29000
m29000| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34681 #9 (9 connections now open)
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:27 [conn] connected connection!
m30999| Thu Jun 14 01:43:27 [conn] creating WriteBackListener for: domU-12-31-39-01-70-B4:29000 serverID: 4fd979ff60d677edb238246f
m30999| Thu Jun 14 01:43:27 [conn] initializing shard connection to domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:27 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "domU-12-31-39-01-70-B4:29000", serverID: ObjectId('4fd979ff60d677edb238246f'), authoritative: true }
m30999| Thu Jun 14 01:43:27 BackgroundJob starting: WriteBackListener-domU-12-31-39-01-70-B4:29000
m30999| Thu Jun 14 01:43:27 [WriteBackListener-domU-12-31-39-01-70-B4:29000] domU-12-31-39-01-70-B4:29000 is not a shard node
m30999| Thu Jun 14 01:43:27 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:43:27 [conn] best shard for new allocation is shard: test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102 mapped: 112 writeLock: 0
m30999| Thu Jun 14 01:43:27 [conn] put [foo] on: test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:27 [conn] [pcursor] creating pcursor over QSpec { ns: "foo.bar", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:43:27 [conn] [pcursor] initializing over 1 shards required by [unsharded @ test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102]
m30999| Thu Jun 14 01:43:27 [conn] [pcursor] initializing on shard test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:43:27 [conn] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102, current connection state is { state: { conn: "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", vinfo: "test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:27 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:43:27 [conn] [pcursor] finishing on shard test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102, current connection state is { state: { conn: "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", vinfo: "test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:27 [conn] [pcursor] finished on shard test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102, current connection state is { state: { conn: "(done)", vinfo: "test-rs0:test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
null
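The pcursor trace ending in "null" corresponds to a simple read of the still-empty, unsharded foo.bar collection routed through mongos; an equivalent from the shell would be:

    // Equivalent read through mongos; the collection has no documents yet, hence null.
    var fooDB = connect("domU-12-31-39-01-70-B4:30999/foo");
    printjson(fooDB.bar.findOne({}));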
{
    "_id" : "test-rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "domU-12-31-39-01-70-B4:31100"
        },
        {
            "_id" : 1,
            "host" : "domU-12-31-39-01-70-B4:31101"
        },
        {
            "_id" : 2,
            "host" : "domU-12-31-39-01-70-B4:31102"
        }
    ]
}
----
Reconfiguring replica set...
----
m31100| Thu Jun 14 01:43:27 [conn2] replSet replSetReconfig config object parses ok, 2 members specified
m31100| Thu Jun 14 01:43:27 [conn2] replSet replSetReconfig [2]
m31100| Thu Jun 14 01:43:27 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Thu Jun 14 01:43:27 [conn2] replSet saveConfigLocally done
m31100| Thu Jun 14 01:43:27 [conn2] replSet relinquishing primary state
m31100| Thu Jun 14 01:43:27 [conn2] replSet SECONDARY
m31100| Thu Jun 14 01:43:27 [conn2] replSet closing client sockets after relinquishing primary
m31100| Thu Jun 14 01:43:27 [conn1] end connection 10.255.119.66:48282 (14 connections now open)
m31101| Thu Jun 14 01:43:27 [conn5] end connection 10.255.119.66:34637 (7 connections now open)
Thu Jun 14 01:43:27 DBClientCursor::init call() failed
Thu Jun 14 01:43:27 query failed : admin.$cmd { replSetReconfig: { _id: "test-rs0", version: 2.0, members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" } ] } } to: 127.0.0.1:31100
m30999| Thu Jun 14 01:43:27 [WriteBackListener-domU-12-31-39-01-70-B4:31100] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [0] server [10.255.119.66:31100]
m30999| Thu Jun 14 01:43:27 [WriteBackListener-domU-12-31-39-01-70-B4:31100] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:43:27 [WriteBackListener-domU-12-31-39-01-70-B4:31100] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd979ff60d677edb238246f') }
m30999| Thu Jun 14 01:43:27 [WriteBackListener-domU-12-31-39-01-70-B4:31100] WriteBackListener exception : DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd979ff60d677edb238246f') }
{
    "message" : "error doing query: failed",
    "fileName" : "src/mongo/shell/collection.js",
    "lineNumber" : 155,
    "stack" : "find(\"admin.$cmd\",[object Object],undefined,-1,0,0,4)@:0\n([object Object])@src/mongo/shell/collection.js:155\n([object Object])@src/mongo/shell/db.js:49\n@/mnt/slaves/Linux_32bit/mongo/jstests/sharding/mongos_no_replica_set_refresh.js:23\n",
    "name" : "Error"
}
Thu Jun 14 01:43:27 trying reconnect to 127.0.0.1:31100
Thu Jun 14 01:43:27 reconnect 127.0.0.1:31100 ok
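The "DBClientCursor::init call() failed" and "query failed" lines are expected here: replSetReconfig makes the primary relinquish and then reclaim primary state, closing client sockets in between, so the connection carrying the command is dropped and the shell reconnects. A sketch of that reconfig step, removing the :31102 member and tolerating the dropped connection (illustrative only):

    // Sketch of the reconfig: drop the :31102 member, bump the version, send replSetReconfig.
    var primary = new Mongo("domU-12-31-39-01-70-B4:31100");
    var cfg = primary.getDB("local").system.replset.findOne();       // current config (version 1, three members)
    cfg.members = cfg.members.filter(function (m) { return m.host.indexOf(":31102") < 0; });
    cfg.version += 1;
    try {
        printjson(primary.getDB("admin").runCommand({ replSetReconfig: cfg }));
    } catch (e) {
        // the primary closes client sockets while relinquishing primary state,
        // so losing this connection is the expected outcome seen in the log
        print("reconfig connection dropped (expected): " + e);
    }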
m31102| Thu Jun 14 01:43:27 [conn3] end connection 10.255.119.66:54553 (7 connections now open)
m31102| Thu Jun 14 01:43:27 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:43:27 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
{
    "setName" : "test-rs0",
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [
        "domU-12-31-39-01-70-B4:31100",
        "domU-12-31-39-01-70-B4:31101"
    ],
    "primary" : "domU-12-31-39-01-70-B4:31100",
    "me" : "domU-12-31-39-01-70-B4:31100",
    "maxBsonObjectSize" : 16777216,
    "localTime" : ISODate("2012-06-14T05:43:27.696Z"),
    "ok" : 1
}
{
    "hosts" : [
        {
            "addr" : "domU-12-31-39-01-70-B4:31100",
            "ok" : true,
            "ismaster" : true,
            "hidden" : false,
            "secondary" : false,
            "pingTimeMillis" : 0
        },
        {
            "addr" : "domU-12-31-39-01-70-B4:31101",
            "ok" : false,
            "ismaster" : false,
            "hidden" : false,
            "secondary" : true,
            "pingTimeMillis" : 0
        },
        {
            "addr" : "domU-12-31-39-01-70-B4:31102",
            "ok" : true,
            "ismaster" : false,
            "hidden" : false,
            "secondary" : true,
            "pingTimeMillis" : 0
        }
    ],
    "master" : 0,
    "nextSlave" : 0
}
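The document above is the mongos-side replica set monitor view of test-rs0 that the test keeps printing while it watches for a refresh; note that :31102 is still listed even though the set has just been reconfigured to two members. One plausible way to pull that view from a mongos (the exact call used by the test is not visible in the log, so treat this as an assumption):

    // Assumption: connPoolStats against the mongos exposes the monitor's host table;
    // guard the field access in case the server version lays it out differently.
    var mongosAdmin = connect("domU-12-31-39-01-70-B4:30999/admin");
    var stats = mongosAdmin.runCommand({ connPoolStats: 1 });
    printjson(stats.replicaSets ? stats.replicaSets["test-rs0"] : stats);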
m31101| Thu Jun 14 01:43:27 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:43:27 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:34683 #10 (8 connections now open)
m31100| Thu Jun 14 01:43:27 [conn17] SocketException handling request, closing client connection: 9001 socket exception [2] server [10.255.119.66:48319]
m31100| Thu Jun 14 01:43:27 [conn2] replSet PRIMARY
m31100| Thu Jun 14 01:43:27 [conn2] replSet replSetReconfig new config saved locally
m31100| Thu Jun 14 01:43:27 [conn2] command admin.$cmd command: { replSetReconfig: { _id: "test-rs0", version: 2.0, members: [ { _id: 0.0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1.0, host: "domU-12-31-39-01-70-B4:31101" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1673976 r:109 w:635646 reslen:37 139ms
m31100| Thu Jun 14 01:43:27 [conn2] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:60034]
m31100| Thu Jun 14 01:43:27 [conn18] SocketException handling request, closing client connection: 9001 socket exception [2] server [10.255.119.66:48321]
m31100| Thu Jun 14 01:43:27 [conn20] SocketException handling request, closing client connection: 9001 socket exception [2] server [10.255.119.66:48330]
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 127.0.0.1:60094 #26 (12 connections now open)
m31100| Thu Jun 14 01:43:27 [conn19] SocketException handling request, closing client connection: 9001 socket exception [2] server [10.255.119.66:48329]
m31100| Thu Jun 14 01:43:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31100| Thu Jun 14 01:43:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31100| Thu Jun 14 01:43:27 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31100| Thu Jun 14 01:43:27 [rsHealthPoll] ERROR: Client::shutdown not called: rsHealthPoll
m31100| Thu Jun 14 01:43:27 [rsHealthPoll] ERROR: Client::shutdown not called: rsHealthPoll
m31101| Thu Jun 14 01:43:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m31101| Thu Jun 14 01:43:27 [rsMgr] replset msgReceivedNewConfig version: version: 2
m31101| Thu Jun 14 01:43:27 [rsMgr] replSet info saving a newer config version to local.system.replset
m31101| Thu Jun 14 01:43:27 [rsMgr] replSet saveConfigLocally done
m31101| Thu Jun 14 01:43:27 [rsMgr] replSet replSetReconfig new config saved locally
m31100| Thu Jun 14 01:43:27 [conn5] end connection 10.255.119.66:48297 (9 connections now open)
m31100| Thu Jun 14 01:43:27 [initandlisten] connection accepted from 10.255.119.66:48344 #27 (10 connections now open)
m31101| Thu Jun 14 01:43:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31101| Thu Jun 14 01:43:27 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31101| Thu Jun 14 01:43:27 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m29000| Thu Jun 14 01:43:27 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.759 secs
m31102| Thu Jun 14 01:43:28 [rsMgr] replset msgReceivedNewConfig version: version: 2
m31102| Thu Jun 14 01:43:28 [rsMgr] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:43:28 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31102| Thu Jun 14 01:43:28 [rsMgr] replSet saveConfigLocally done
m30999| Thu Jun 14 01:43:28 [WriteBackListener-domU-12-31-39-01-70-B4:31102] SocketException: remote: 10.255.119.66:31102 error: 9001 socket exception [0] server [10.255.119.66:31102]
m30999| Thu Jun 14 01:43:28 [WriteBackListener-domU-12-31-39-01-70-B4:31102] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:43:28 [WriteBackListener-domU-12-31-39-01-70-B4:31102] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31102 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd979ff60d677edb238246f') }
m31100| Thu Jun 14 01:43:28 [conn6] end connection 10.255.119.66:48298 (9 connections now open)
m31101| Thu Jun 14 01:43:28 [conn4] end connection 10.255.119.66:34636 (7 connections now open)
m31102| Thu Jun 14 01:43:28 [conn1] end connection 10.255.119.66:54550 (6 connections now open)
m30999| Thu Jun 14 01:43:28 [WriteBackListener-domU-12-31-39-01-70-B4:31102] WriteBackListener exception : DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31102 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd979ff60d677edb238246f') }
m31102| Thu Jun 14 01:43:28 [rsMgr] replSet REMOVED
m31102| Thu Jun 14 01:43:28 [rsMgr] replSet info self not present in the repl set configuration:
m31102| Thu Jun 14 01:43:28 [rsMgr] { _id: "test-rs0", version: 2, members: [ { _id: 0, host: "domU-12-31-39-01-70-B4:31100" }, { _id: 1, host: "domU-12-31-39-01-70-B4:31101" } ] }
m31102| Thu Jun 14 01:43:28 [rsMgr] trying to contact domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:43:28 [rsMgr] trying to contact domU-12-31-39-01-70-B4:31101
m31102| Thu Jun 14 01:43:28 [rsMgr] replSet info Couldn't load config yet. Sleeping 20sec and will try again.
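Member :31102 has gone to the REMOVED state because the version-2 config it just saved no longer contains it. A hedged way to confirm that from the removed node's side:

    // Ask the removed member directly; replSetGetStatus reports that it is no
    // longer part of the configuration it can load.
    var removed = new Mongo("domU-12-31-39-01-70-B4:31102");
    printjson(removed.getDB("admin").runCommand({ replSetGetStatus: 1 }));
    printjson(removed.getDB("local").system.replset.findOne());      // the saved version-2 config, without :31102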
m30999| Thu Jun 14 01:43:28 [WriteBackListener-domU-12-31-39-01-70-B4:31100] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:28 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:28 [WriteBackListener-domU-12-31-39-01-70-B4:31100] connected connection!
m31100| Thu Jun 14 01:43:28 [initandlisten] connection accepted from 10.255.119.66:48345 #28 (10 connections now open)
m30999| Thu Jun 14 01:43:29 [WriteBackListener-domU-12-31-39-01-70-B4:31102] creating new connection to:domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:29 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:29 [WriteBackListener-domU-12-31-39-01-70-B4:31102] connected connection!
m31102| Thu Jun 14 01:43:29 [initandlisten] connection accepted from 10.255.119.66:54608 #9 (7 connections now open)
m31101| Thu Jun 14 01:43:29 [rsHealthPoll] ERROR: Client::shutdown not called: rsHealthPoll
m31101| Thu Jun 14 01:43:29 [rsHealthPoll] ERROR: Client::shutdown not called: rsHealthPoll
m31102| Thu Jun 14 01:43:30 [rsHealthPoll] ERROR: Client::shutdown not called: rsHealthPoll
m31102| Thu Jun 14 01:43:30 [rsHealthPoll] ERROR: Client::shutdown not called: rsHealthPoll
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] Socket recv() errno:104 Connection reset by peer 10.255.119.66:31100
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [1] server [10.255.119.66:31100]
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m31100| Thu Jun 14 01:43:36 [conn14] end connection 10.255.119.66:48306 (9 connections now open)
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31100
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31100 ok
m31100| Thu Jun 14 01:43:36 [initandlisten] connection accepted from 10.255.119.66:48347 #29 (10 connections now open)
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31101" } from test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] erasing host { addr: "domU-12-31-39-01-70-B4:31102", isMaster: false, secondary: false, hidden: false, ok: false } from replica set test-rs0
m31100| Thu Jun 14 01:43:36 [conn15] end connection 10.255.119.66:48309 (9 connections now open)
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] Socket recv() errno:104 Connection reset by peer 10.255.119.66:31100
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [1] server [10.255.119.66:31100]
Thu Jun 14 01:43:36 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m31100| Thu Jun 14 01:43:36 [initandlisten] connection accepted from 10.255.119.66:48348 #30 (10 connections now open)
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m30999| Thu Jun 14 01:43:37 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:43:37 [Balancer] skipping balancing round because balancing is disabled
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m30998| Thu Jun 14 01:43:37 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:43:37 [Balancer] skipping balancing round because balancing is disabled
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m31100| Thu Jun 14 01:43:37 [conn22] end connection 10.255.119.66:48333 (9 connections now open)
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] checking replica set: test-rs0
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] Socket recv() errno:104 Connection reset by peer 10.255.119.66:31100
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] SocketException: remote: 10.255.119.66:31100 error: 9001 socket exception [1] server [10.255.119.66:31100]
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] User Assertion: 10276:DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { ismaster: 1 }
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception domU-12-31-39-01-70-B4:31100 DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { ismaster: 1 }
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] _check : test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] trying reconnect to domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] reconnect domU-12-31-39-01-70-B4:31100 ok
m31100| Thu Jun 14 01:43:37 [initandlisten] connection accepted from 10.255.119.66:48349 #31 (10 connections now open)
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652617545), ok: 1.0 }
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31101" } from test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] erasing host { addr: "domU-12-31-39-01-70-B4:31102", isMaster: false, secondary: true, hidden: false, ok: true } from replica set test-rs0
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] creating new connection to:domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] connected connection!
m31102| Thu Jun 14 01:43:37 [conn7] end connection 10.255.119.66:54597 (6 connections now open)
m31100| Thu Jun 14 01:43:37 [initandlisten] connection accepted from 10.255.119.66:48350 #32 (11 connections now open)
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652617546), ok: 1.0 }
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31100" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339652617546), ok: 1.0 }
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] creating new connection to:domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] connected connection!
m31101| Thu Jun 14 01:43:37 [initandlisten] connection accepted from 10.255.119.66:34692 #11 (8 connections now open)
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:37 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
----
Mongos successfully detected change...
----
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: "test-rs0", host: "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{
"_id" : "test-rs0",
"host" : "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101"
}
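For reference, the check printed above can be reproduced from a mongo shell against the mongos of this run. A minimal sketch, assuming the host name, port 30999 and set name shown in this log; it is not the test's actual source:

    // read the shard document for test-rs0 through the mongos
    var mongos = new Mongo("domU-12-31-39-01-70-B4:30999");
    var shardDoc = mongos.getDB("config").shards.findOne({ _id: "test-rs0" });
    // after the monitor dropped :31102, the host string lists only two members
    assert.eq("test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101",
              shardDoc.host);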
----
Now test adding new replica set servers...
----
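The reconfig logged below can be expressed from a mongo shell connected to the current primary. A minimal illustrative sketch, assuming the host names from this log and an arbitrary member _id; it is not the test's actual source:

    // fetch the current replica set config from the primary and add :31102 back
    var primary = new Mongo("domU-12-31-39-01-70-B4:31100");
    var cfg = primary.getDB("local").system.replset.findOne();
    cfg.version = cfg.version + 1;
    cfg.members.push({ _id: 2, host: "domU-12-31-39-01-70-B4:31102" });  // _id assumed
    // the command succeeds even though :31102 is still down, matching the
    // { "down" : [ "domU-12-31-39-01-70-B4:31102" ], "ok" : 1 } reply below
    printjson(primary.getDB("admin").runCommand({ replSetReconfig: cfg }));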
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:37 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: "test-rs0", host: "test-rs0/domU-12-31-39-01-70-B4:31100" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[
{
"_id" : "test-rs0",
"host" : "test-rs0/domU-12-31-39-01-70-B4:31100"
}
]
m31100| Thu Jun 14 01:43:37 [conn26] replSet replSetReconfig config object parses ok, 3 members specified
m31100| Thu Jun 14 01:43:37 [conn26] replSet cmufcc requestHeartbeat domU-12-31-39-01-70-B4:31102 : 9001 socket exception [2] server [10.255.119.66:31102]
m31100| Thu Jun 14 01:43:37 [conn26] replSet replSetReconfig [2]
m31100| Thu Jun 14 01:43:37 [conn26] replSet info saving a newer config version to local.system.replset
{ "down" : [ "domU-12-31-39-01-70-B4:31102" ], "ok" : 1 }
{
"setName" : "test-rs0",
"ismaster" : true,
"secondary" : false,
"hosts" : [
"domU-12-31-39-01-70-B4:31100",
"domU-12-31-39-01-70-B4:31102",
"domU-12-31-39-01-70-B4:31101"
],
"primary" : "domU-12-31-39-01-70-B4:31100",
"me" : "domU-12-31-39-01-70-B4:31100",
"maxBsonObjectSize" : 16777216,
"localTime" : ISODate("2012-06-14T05:43:37.696Z"),
"ok" : 1
}
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m31100| Thu Jun 14 01:43:37 [conn26] replSet saveConfigLocally done
m31100| Thu Jun 14 01:43:37 [conn26] replSet info : additive change to configuration
m31100| Thu Jun 14 01:43:37 [conn26] replSet replSetReconfig new config saved locally
m31102| Thu Jun 14 01:43:37 [initandlisten] connection accepted from 10.255.119.66:54614 #10 (7 connections now open)
m31100| Thu Jun 14 01:43:37 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
m31101| Thu Jun 14 01:43:37 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:37 [initandlisten] connection accepted from 10.255.119.66:48353 #33 (12 connections now open)
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m31101| Thu Jun 14 01:43:37 [rsMgr] replset msgReceivedNewConfig version: version: 3
m31101| Thu Jun 14 01:43:37 [rsMgr] replSet info saving a newer config version to local.system.replset
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m31101| Thu Jun 14 01:43:38 [rsMgr] replSet saveConfigLocally done
m31101| Thu Jun 14 01:43:38 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:43:38 [rsMgr] replSet info : additive change to configuration
m31101| Thu Jun 14 01:43:38 [rsMgr] replSet replSetReconfig new config saved locally
m31102| Thu Jun 14 01:43:38 [conn4] end connection 10.255.119.66:54556 (6 connections now open)
m31101| Thu Jun 14 01:43:38 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Thu Jun 14 01:43:38 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31102 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31102 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", v: 3, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31101" }
m31101| Thu Jun 14 01:43:38 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state DOWN
m31100| Thu Jun 14 01:43:38 [initandlisten] connection accepted from 10.255.119.66:48354 #34 (13 connections now open)
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m31100| Thu Jun 14 01:43:39 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Thu Jun 14 01:43:39 [slaveTracking] build index done. scanned 0 total records. 0 secs
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m31102| Thu Jun 14 01:43:40 [initandlisten] connection accepted from 10.255.119.66:54617 #11 (7 connections now open)
m31101| Thu Jun 14 01:43:40 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is up
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
Thu Jun 14 01:43:46 [ReplicaSetMonitorWatcher] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
Thu Jun 14 01:43:46 [ReplicaSetMonitorWatcher] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set test-rs0
Thu Jun 14 01:43:46 [ReplicaSetMonitorWatcher] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set test-rs0
m31102| Thu Jun 14 01:43:46 [initandlisten] connection accepted from 10.255.119.66:54618 #12 (8 connections now open)
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] checking replica set: test-rs0
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652627548), ok: 1.0 }
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] changing hosts to { 0: "domU-12-31-39-01-70-B4:31100", 1: "domU-12-31-39-01-70-B4:31102", 2: "domU-12-31-39-01-70-B4:31101" } from test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] trying to add new host domU-12-31-39-01-70-B4:31102 to replica set test-rs0
m30999| Thu Jun 14 01:43:47 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] successfully connected to new host domU-12-31-39-01-70-B4:31102 in replica set test-rs0
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] _check : test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:43:47 [initandlisten] connection accepted from 10.255.119.66:54619 #13 (9 connections now open)
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652627549), ok: 1.0 }
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "domU-12-31-39-01-70-B4:31100", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31101" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31100", maxBsonObjectSize: 16777216, localTime: new Date(1339652627550), ok: 1.0 }
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: domU-12-31-39-01-70-B4:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "domU-12-31-39-01-70-B4:31101", "domU-12-31-39-01-70-B4:31102", "domU-12-31-39-01-70-B4:31100" ], primary: "domU-12-31-39-01-70-B4:31100", me: "domU-12-31-39-01-70-B4:31101", maxBsonObjectSize: 16777216, localTime: new Date(1339652627550), ok: 1.0 }
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true domU-12-31-39-01-70-B4:31100
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true domU-12-31-39-01-70-B4:31101
m30999| Thu Jun 14 01:43:47 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = false domU-12-31-39-01-70-B4:31102
{
"hosts" : [
{
"addr" : "domU-12-31-39-01-70-B4:31100",
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31101",
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"addr" : "domU-12-31-39-01-70-B4:31102",
"ok" : false,
"ismaster" : false,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 0
}
m30999| Thu Jun 14 01:43:47 [conn] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Thu Jun 14 01:43:47 [conn] [pcursor] initializing over 1 shards required by [unsharded @ config:domU-12-31-39-01-70-B4:29000]
m30999| Thu Jun 14 01:43:47 [conn] [pcursor] initializing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Thu Jun 14 01:43:47 [conn] [pcursor] initialized query (lazily) on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:47 [conn] [pcursor] finishing over 1 shards
m30999| Thu Jun 14 01:43:47 [conn] [pcursor] finishing on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "domU-12-31-39-01-70-B4:29000", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Thu Jun 14 01:43:47 [conn] [pcursor] finished on shard config:domU-12-31-39-01-70-B4:29000, current connection state is { state: { conn: "(done)", vinfo: "config:domU-12-31-39-01-70-B4:29000", cursor: { _id: "test-rs0", host: "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{
"_id" : "test-rs0",
"host" : "test-rs0/domU-12-31-39-01-70-B4:31100,domU-12-31-39-01-70-B4:31101,domU-12-31-39-01-70-B4:31102"
}
----
Done...
----
m30999| Thu Jun 14 01:43:47 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:43:47 [conn3] end connection 10.255.119.66:34657 (8 connections now open)
m29000| Thu Jun 14 01:43:47 [conn5] end connection 10.255.119.66:34663 (7 connections now open)
m29000| Thu Jun 14 01:43:47 [conn4] end connection 10.255.119.66:34662 (6 connections now open)
m29000| Thu Jun 14 01:43:47 [conn9] end connection 10.255.119.66:34681 (5 connections now open)
m31100| Thu Jun 14 01:43:47 [conn31] end connection 10.255.119.66:48349 (12 connections now open)
m31100| Thu Jun 14 01:43:47 [conn25] end connection 10.255.119.66:48340 (11 connections now open)
m31100| Thu Jun 14 01:43:47 [conn32] end connection 10.255.119.66:48350 (10 connections now open)
m31100| Thu Jun 14 01:43:47 [conn24] end connection 10.255.119.66:48339 (9 connections now open)
m31101| Thu Jun 14 01:43:47 [conn8] end connection 10.255.119.66:34675 (7 connections now open)
m31101| Thu Jun 14 01:43:47 [conn11] end connection 10.255.119.66:34692 (6 connections now open)
m31102| Thu Jun 14 01:43:48 [initandlisten] connection accepted from 10.255.119.66:54620 #14 (10 connections now open)
m31102| Thu Jun 14 01:43:48 [rsMgr] trying to contact domU-12-31-39-01-70-B4:31100
m31100| Thu Jun 14 01:43:48 [initandlisten] connection accepted from 10.255.119.66:48359 #35 (10 connections now open)
m31102| Thu Jun 14 01:43:48 [rsMgr] trying to contact domU-12-31-39-01-70-B4:31101
m31101| Thu Jun 14 01:43:48 [initandlisten] connection accepted from 10.255.119.66:34701 #12 (7 connections now open)
m31101| Thu Jun 14 01:43:48 [conn12] end connection 10.255.119.66:34701 (6 connections now open)
Thu Jun 14 01:43:48 [ReplicaSetMonitorWatcher] Socket recv() errno:104 Connection reset by peer 10.255.119.66:31102
Thu Jun 14 01:43:48 [ReplicaSetMonitorWatcher] SocketException: remote: 10.255.119.66:31102 error: 9001 socket exception [1] server [10.255.119.66:31102]
Thu Jun 14 01:43:48 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:43:48 [rsMgr] replSet I am domU-12-31-39-01-70-B4:31102
m31102| Thu Jun 14 01:43:48 [rsMgr] replSet got config version 3 from a remote, saving locally
m31102| Thu Jun 14 01:43:48 [rsMgr] replSet info saving a newer config version to local.system.replset
m31102| Thu Jun 14 01:43:48 [rsMgr] replSet saveConfigLocally done
m31102| Thu Jun 14 01:43:48 [rsMgr] replset msgReceivedNewConfig version: version: 2
m31102| Thu Jun 14 01:43:48 [rsSync] replSet SECONDARY
m31102| Thu Jun 14 01:43:48 [rsBackgroundSync] replSet syncing to: domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:43:48 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 2 3
m31102| Thu Jun 14 01:43:48 [conn12] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:323 1649ms
m31102| Thu Jun 14 01:43:48 [conn13] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:323 601ms
m31102| Thu Jun 14 01:43:48 [conn6] end connection 10.255.119.66:54573 (9 connections now open)
m31102| Thu Jun 14 01:43:48 [conn13] end connection 10.255.119.66:54619 (9 connections now open)
m31102| Thu Jun 14 01:43:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is up
m31102| Thu Jun 14 01:43:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31101 is now in state SECONDARY
m31102| Thu Jun 14 01:43:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is up
m31102| Thu Jun 14 01:43:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state PRIMARY
m31101| Thu Jun 14 01:43:48 [initandlisten] connection accepted from 10.255.119.66:34702 #13 (7 connections now open)
m31100| Thu Jun 14 01:43:48 [initandlisten] connection accepted from 10.255.119.66:48362 #36 (11 connections now open)
Thu Jun 14 01:43:48 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:43:48 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:43:48 [conn7] end connection 10.255.119.66:34667 (4 connections now open)
m29000| Thu Jun 14 01:43:48 [conn8] end connection 10.255.119.66:34668 (3 connections now open)
m29000| Thu Jun 14 01:43:48 [conn6] end connection 10.255.119.66:34666 (2 connections now open)
m31101| Thu Jun 14 01:43:48 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31102 is now in state SECONDARY
m31102| Thu Jun 14 01:43:49 [rsSyncNotifier] replset setting oplog notifier to domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:43:49 [rsSyncNotifier] Socket flush send() errno:9 Bad file descriptor 10.255.119.66:31100
m31102| Thu Jun 14 01:43:49 [rsSyncNotifier] caught exception (socket exception) in destructor (~PiggyBackData)
m31100| Thu Jun 14 01:43:49 [initandlisten] connection accepted from 10.255.119.66:48363 #37 (12 connections now open)
Thu Jun 14 01:43:49 shell: stopped mongo program on port 30998
Thu Jun 14 01:43:49 No db started on port: 30000
Thu Jun 14 01:43:49 shell: stopped mongo program on port 30000
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Thu Jun 14 01:43:49 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Thu Jun 14 01:43:49 [interruptThread] now exiting
m31100| Thu Jun 14 01:43:49 dbexit:
m31100| Thu Jun 14 01:43:49 [interruptThread] shutdown: going to close listening sockets...
m31100| Thu Jun 14 01:43:49 [interruptThread] closing listening socket: 43
m31100| Thu Jun 14 01:43:49 [interruptThread] closing listening socket: 44
m31100| Thu Jun 14 01:43:49 [interruptThread] closing listening socket: 46
m31100| Thu Jun 14 01:43:49 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Thu Jun 14 01:43:49 [interruptThread] shutdown: going to flush diaglog...
m31100| Thu Jun 14 01:43:49 [interruptThread] shutdown: going to close sockets...
m31100| Thu Jun 14 01:43:49 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Thu Jun 14 01:43:49 [interruptThread] shutdown: closing all files...
m31100| Thu Jun 14 01:43:49 [interruptThread] closeAllFiles() finished
m31100| Thu Jun 14 01:43:49 [interruptThread] shutdown: removing fs lock...
m31100| Thu Jun 14 01:43:49 dbexit: really exiting now
m31101| Thu Jun 14 01:43:49 [conn10] end connection 10.255.119.66:34683 (6 connections now open)
m31102| Thu Jun 14 01:43:49 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:43:49 [conn10] end connection 10.255.119.66:54614 (7 connections now open)
m31101| Thu Jun 14 01:43:49 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31102| Thu Jun 14 01:43:49 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: domU-12-31-39-01-70-B4:31100
m31101| Thu Jun 14 01:43:50 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Thu Jun 14 01:43:50 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", v: 3, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31101" }
m31101| Thu Jun 14 01:43:50 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31101| Thu Jun 14 01:43:50 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31102 would veto
m31102| Thu Jun 14 01:43:50 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Thu Jun 14 01:43:50 [rsHealthPoll] replSet info domU-12-31-39-01-70-B4:31100 is down (or slow to respond): DBClientBase::findN: transport error: domU-12-31-39-01-70-B4:31100 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", v: 3, pv: 1, checkEmpty: false, from: "domU-12-31-39-01-70-B4:31102" }
m31102| Thu Jun 14 01:43:50 [rsHealthPoll] replSet member domU-12-31-39-01-70-B4:31100 is now in state DOWN
m31102| Thu Jun 14 01:43:50 [rsMgr] not electing self, domU-12-31-39-01-70-B4:31101 would veto
Thu Jun 14 01:43:50 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Thu Jun 14 01:43:50 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Thu Jun 14 01:43:50 [interruptThread] now exiting
m31101| Thu Jun 14 01:43:50 dbexit:
m31101| Thu Jun 14 01:43:50 [interruptThread] shutdown: going to close listening sockets...
m31101| Thu Jun 14 01:43:50 [interruptThread] closing listening socket: 47
m31101| Thu Jun 14 01:43:50 [interruptThread] closing listening socket: 48
m31101| Thu Jun 14 01:43:50 [interruptThread] closing listening socket: 49
m31101| Thu Jun 14 01:43:50 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Thu Jun 14 01:43:50 [interruptThread] shutdown: going to flush diaglog...
m31101| Thu Jun 14 01:43:50 [interruptThread] shutdown: going to close sockets...
m31101| Thu Jun 14 01:43:50 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Thu Jun 14 01:43:50 [interruptThread] shutdown: closing all files...
m31101| Thu Jun 14 01:43:50 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:43:50 [conn11] end connection 10.255.119.66:54617 (6 connections now open)
m31101| Thu Jun 14 01:43:50 [interruptThread] shutdown: removing fs lock...
m31101| Thu Jun 14 01:43:50 dbexit: really exiting now
Thu Jun 14 01:43:51 shell: stopped mongo program on port 31101
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
ReplSetTest stop *** Shutting down mongod in port 31102 ***
m31102| Thu Jun 14 01:43:51 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Thu Jun 14 01:43:51 [interruptThread] now exiting
m31102| Thu Jun 14 01:43:51 dbexit:
m31102| Thu Jun 14 01:43:51 [interruptThread] shutdown: going to close listening sockets...
m31102| Thu Jun 14 01:43:51 [interruptThread] closing listening socket: 50
m31102| Thu Jun 14 01:43:51 [interruptThread] closing listening socket: 51
m31102| Thu Jun 14 01:43:51 [interruptThread] closing listening socket: 52
m31102| Thu Jun 14 01:43:51 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Thu Jun 14 01:43:51 [interruptThread] shutdown: going to flush diaglog...
m31102| Thu Jun 14 01:43:51 [interruptThread] shutdown: going to close sockets...
m31102| Thu Jun 14 01:43:51 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Thu Jun 14 01:43:51 [interruptThread] shutdown: closing all files...
m31102| Thu Jun 14 01:43:51 [interruptThread] closeAllFiles() finished
m31102| Thu Jun 14 01:43:51 [interruptThread] shutdown: removing fs lock...
m31102| Thu Jun 14 01:43:51 dbexit: really exiting now
Thu Jun 14 01:43:52 shell: stopped mongo program on port 31102
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
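
The stopSet lines above are the standard ReplSetTest teardown at the end of the previous sharding test. A minimal sketch of the shell-side calls that produce this teardown pattern, assuming the usual ReplSetTest jstest helper (the set name "test-rs0" and the three nodes come from the log; everything else is illustrative, not the actual test source):

    // Illustrative sketch only -- not taken from the test being run.
    // stopSet() sends SIGTERM to each member ("got signal 15" above) and then
    // removes the dbpaths ("ReplSetTest stopSet deleting all dbpaths").
    var rt = new ReplSetTest({ name: "test-rs0", nodes: 3 });  // assumed options
    rt.startSet();
    rt.initiate();
    // ... test body ...
    rt.stopSet();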
m29000| Thu Jun 14 01:43:52 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:43:52 [interruptThread] now exiting
m29000| Thu Jun 14 01:43:52 dbexit:
m29000| Thu Jun 14 01:43:52 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:43:52 [interruptThread] closing listening socket: 59
m29000| Thu Jun 14 01:43:52 [interruptThread] closing listening socket: 60
m29000| Thu Jun 14 01:43:52 [interruptThread] closing listening socket: 61
m29000| Thu Jun 14 01:43:52 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:43:52 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:43:52 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:43:52 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:43:52 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:43:52 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:43:52 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:43:52 dbexit: really exiting now
Thu Jun 14 01:43:53 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 58.189 seconds ***
58272.694111ms
Thu Jun 14 01:43:53 [conn15] end connection 127.0.0.1:59266 (33 connections now open)
Thu Jun 14 01:43:53 [conn24] end connection 127.0.0.1:54782 (33 connections now open)
Thu Jun 14 01:43:53 [conn16] end connection 127.0.0.1:59331 (32 connections now open)
Thu Jun 14 01:43:53 [conn17] end connection 127.0.0.1:59394 (30 connections now open)
Thu Jun 14 01:43:53 [conn18] end connection 127.0.0.1:54598 (29 connections now open)
Thu Jun 14 01:43:53 [conn19] end connection 127.0.0.1:54627 (28 connections now open)
Thu Jun 14 01:43:53 [conn20] end connection 127.0.0.1:54692 (27 connections now open)
Thu Jun 14 01:43:53 [conn21] end connection 127.0.0.1:54715 (26 connections now open)
Thu Jun 14 01:43:53 [conn23] end connection 127.0.0.1:54755 (25 connections now open)
Thu Jun 14 01:43:53 [conn22] end connection 127.0.0.1:54734 (24 connections now open)
Thu Jun 14 01:43:53 [conn25] end connection 127.0.0.1:54803 (24 connections now open)
Thu Jun 14 01:43:53 [conn26] end connection 127.0.0.1:54825 (22 connections now open)
Thu Jun 14 01:43:53 [conn27] end connection 127.0.0.1:54851 (21 connections now open)
Thu Jun 14 01:43:53 [conn28] end connection 127.0.0.1:54875 (20 connections now open)
Thu Jun 14 01:43:53 [conn29] end connection 127.0.0.1:54899 (19 connections now open)
Thu Jun 14 01:43:53 [conn30] end connection 127.0.0.1:54921 (18 connections now open)
Thu Jun 14 01:43:53 [conn31] end connection 127.0.0.1:54953 (17 connections now open)
Thu Jun 14 01:43:53 [conn33] end connection 127.0.0.1:34952 (17 connections now open)
Thu Jun 14 01:43:53 [conn32] end connection 127.0.0.1:54990 (15 connections now open)
Thu Jun 14 01:43:53 [conn34] end connection 127.0.0.1:35000 (14 connections now open)
Thu Jun 14 01:43:53 [conn35] end connection 127.0.0.1:35043 (13 connections now open)
Thu Jun 14 01:43:53 [conn36] end connection 127.0.0.1:35071 (13 connections now open)
Thu Jun 14 01:43:53 [conn37] end connection 127.0.0.1:35091 (11 connections now open)
Thu Jun 14 01:43:53 [conn38] end connection 127.0.0.1:35132 (10 connections now open)
Thu Jun 14 01:43:53 [conn40] end connection 127.0.0.1:35179 (9 connections now open)
Thu Jun 14 01:43:53 [conn43] end connection 127.0.0.1:34742 (8 connections now open)
Thu Jun 14 01:43:53 [conn44] end connection 127.0.0.1:34774 (7 connections now open)
Thu Jun 14 01:43:53 [conn46] end connection 127.0.0.1:34821 (6 connections now open)
Thu Jun 14 01:43:53 [conn39] end connection 127.0.0.1:35155 (5 connections now open)
Thu Jun 14 01:43:53 [conn42] end connection 127.0.0.1:35226 (4 connections now open)
Thu Jun 14 01:43:53 [conn45] end connection 127.0.0.1:34799 (3 connections now open)
Thu Jun 14 01:43:53 [conn47] end connection 127.0.0.1:34847 (2 connections now open)
Thu Jun 14 01:43:53 [initandlisten] connection accepted from 127.0.0.1:34931 #48 (3 connections now open)
Thu Jun 14 01:43:53 [conn41] end connection 127.0.0.1:35202 (2 connections now open)
*******************************************
Test : mongos_validate_writes.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mongos_validate_writes.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/mongos_validate_writes.js";TestData.testFile = "mongos_validate_writes.js";TestData.testName = "mongos_validate_writes";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:43:53 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:43:53 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:43:53
m30000| Thu Jun 14 01:43:53 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:43:53
m30000| Thu Jun 14 01:43:53 [initandlisten] MongoDB starting : pid=27256 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:43:53 [initandlisten]
m30000| Thu Jun 14 01:43:53 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:43:53 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:43:53 [initandlisten]
m30000| Thu Jun 14 01:43:53 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:43:53 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:43:53 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:43:53 [initandlisten]
m30000| Thu Jun 14 01:43:53 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:43:53 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:43:53 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:43:53 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:43:53 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:43:53 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:43:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:43:54 [initandlisten] connection accepted from 127.0.0.1:60277 #1 (1 connection now open)
m30001| Thu Jun 14 01:43:54
m30001| Thu Jun 14 01:43:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:43:54
m30001| Thu Jun 14 01:43:54 [initandlisten] MongoDB starting : pid=27269 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:43:54 [initandlisten]
m30001| Thu Jun 14 01:43:54 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:43:54 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:43:54 [initandlisten]
m30001| Thu Jun 14 01:43:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:43:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:43:54 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:43:54 [initandlisten]
m30001| Thu Jun 14 01:43:54 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:43:54 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:43:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:43:54 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:43:54 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:43:54 [websvr] admin web console waiting for connections on port 31001
Resetting db path '/data/db/test-config0'
Thu Jun 14 01:43:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m30001| Thu Jun 14 01:43:54 [initandlisten] connection accepted from 127.0.0.1:48854 #1 (1 connection now open)
m29000| Thu Jun 14 01:43:54
m29000| Thu Jun 14 01:43:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Thu Jun 14 01:43:54
m29000| Thu Jun 14 01:43:54 [initandlisten] MongoDB starting : pid=27282 port=29000 dbpath=/data/db/test-config0 32-bit host=domU-12-31-39-01-70-B4
m29000| Thu Jun 14 01:43:54 [initandlisten]
m29000| Thu Jun 14 01:43:54 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Thu Jun 14 01:43:54 [initandlisten] ** Not recommended for production.
m29000| Thu Jun 14 01:43:54 [initandlisten]
m29000| Thu Jun 14 01:43:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Thu Jun 14 01:43:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Thu Jun 14 01:43:54 [initandlisten] ** with --journal, the limit is lower
m29000| Thu Jun 14 01:43:54 [initandlisten]
m29000| Thu Jun 14 01:43:54 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Thu Jun 14 01:43:54 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Thu Jun 14 01:43:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m29000| Thu Jun 14 01:43:54 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Thu Jun 14 01:43:54 [initandlisten] waiting for connections on port 29000
m29000| Thu Jun 14 01:43:54 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Thu Jun 14 01:43:54 [websvr] ERROR: addr already in use
"localhost:29000"
m29000| Thu Jun 14 01:43:54 [initandlisten] connection accepted from 127.0.0.1:44402 #1 (1 connection now open)
m29000| Thu Jun 14 01:43:54 [initandlisten] connection accepted from 127.0.0.1:44403 #2 (2 connections now open)
ShardingTest test :
{
	"config" : "localhost:29000",
	"shards" : [
		connection to localhost:30000,
		connection to localhost:30001
	]
}
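
The block above describes the cluster this test runs against: one config server on 29000 and two shards on 30000/30001, with three mongos routers (30999, 30998, 30997) started in the lines that follow. A hedged sketch of how a jstest typically brings such a cluster up with the shell's ShardingTest helper; the actual options used by mongos_validate_writes.js are not visible in this log, so the constructor arguments and variable names below are assumptions:

    // Sketch only -- the real test's options are not shown in the log.
    var st = new ShardingTest({ shards: 2, mongos: 3 });         // assumed object-form options
    var mongosA = st.s0, mongosB = st.s1, mongosC = st.s2;       // hypothetical names for the three routers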
Thu Jun 14 01:43:54 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:29000
m29000| Thu Jun 14 01:43:54 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Thu Jun 14 01:43:54 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29000| Thu Jun 14 01:43:54 [initandlisten] connection accepted from 127.0.0.1:44405 #3 (3 connections now open)
m30999| Thu Jun 14 01:43:54 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:43:54 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27297 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:43:54 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:43:54 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:43:54 [mongosMain] options: { configdb: "localhost:29000", port: 30999 }
m29000| Thu Jun 14 01:43:54 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.26 secs
m29000| Thu Jun 14 01:43:54 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Thu Jun 14 01:43:55 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.299 secs
m29000| Thu Jun 14 01:43:55 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Thu Jun 14 01:43:55 [conn2] build index config.settings { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [conn2] insert config.settings keyUpdates:0 locks(micros) w:571658 571ms
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44408 #4 (4 connections now open)
m29000| Thu Jun 14 01:43:55 [conn4] build index config.version { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [conn3] build index config.chunks { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [conn3] info: creating collection config.chunks on add index
m29000| Thu Jun 14 01:43:55 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [conn3] build index config.shards { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [conn3] info: creating collection config.shards on add index
m29000| Thu Jun 14 01:43:55 [conn3] build index config.shards { host: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:55 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:43:55 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:43:55 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:43:55 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:43:55
m30999| Thu Jun 14 01:43:55 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:43:55 [conn4] build index config.mongos { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44409 #5 (5 connections now open)
m30999| Thu Jun 14 01:43:55 [mongosMain] waiting for connections on port 30999
m29000| Thu Jun 14 01:43:55 [conn5] build index config.locks { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:55 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30999:1339652635:1804289383 (sleeping for 30000ms)
m29000| Thu Jun 14 01:43:55 [conn3] build index config.lockpings { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:43:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' acquired, ts : 4fd97a1b591abdbaaebc7691
m30999| Thu Jun 14 01:43:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' unlocked.
m29000| Thu Jun 14 01:43:55 [conn3] build index config.lockpings { ping: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:43:55 [mongosMain] connection accepted from 127.0.0.1:54351 #1 (1 connection now open)
Thu Jun 14 01:43:55 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:29000
m30998| Thu Jun 14 01:43:55 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:43:55 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27316 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:43:55 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:43:55 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:43:55 [mongosMain] options: { configdb: "localhost:29000", port: 30998 }
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44412 #6 (6 connections now open)
m30998| Thu Jun 14 01:43:55 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:43:55 [Balancer] about to contact config servers and shards
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44413 #7 (7 connections now open)
m30998| Thu Jun 14 01:43:55 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:43:55 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:43:55
m30998| Thu Jun 14 01:43:55 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44414 #8 (8 connections now open)
m30998| Thu Jun 14 01:43:55 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:43:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652635:1804289383' acquired, ts : 4fd97a1b632292824afda1ee
m30998| Thu Jun 14 01:43:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652635:1804289383' unlocked.
m30998| Thu Jun 14 01:43:55 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30998:1339652635:1804289383 (sleeping for 30000ms)
Thu Jun 14 01:43:55 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30997 --configdb localhost:29000
m30998| Thu Jun 14 01:43:55 [mongosMain] connection accepted from 127.0.0.1:42173 #1 (1 connection now open)
m30997| Thu Jun 14 01:43:55 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30997| Thu Jun 14 01:43:55 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27332 port=30997 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30997| Thu Jun 14 01:43:55 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30997| Thu Jun 14 01:43:55 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30997| Thu Jun 14 01:43:55 [mongosMain] options: { configdb: "localhost:29000", port: 30997 }
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44417 #9 (9 connections now open)
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44418 #10 (10 connections now open)
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44419 #11 (11 connections now open)
m30997| Thu Jun 14 01:43:55 [websvr] admin web console waiting for connections on port 31997
m30997| Thu Jun 14 01:43:55 [mongosMain] waiting for connections on port 30997
m30997| Thu Jun 14 01:43:55 [Balancer] about to contact config servers and shards
m30997| Thu Jun 14 01:43:55 [Balancer] config servers and shards contacted successfully
m30997| Thu Jun 14 01:43:55 [Balancer] balancer id: domU-12-31-39-01-70-B4:30997 started at Jun 14 01:43:55
m30997| Thu Jun 14 01:43:55 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44420 #12 (12 connections now open)
m30997| Thu Jun 14 01:43:55 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30997:1339652635:1804289383 (sleeping for 30000ms)
m30997| Thu Jun 14 01:43:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339652635:1804289383' acquired, ts : 4fd97a1b3fa2ba75ec315064
m30997| Thu Jun 14 01:43:55 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30997:1339652635:1804289383' unlocked.
m30997| Thu Jun 14 01:43:55 [mongosMain] connection accepted from 127.0.0.1:60611 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:43:55 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:43:55 [conn] put [admin] on: config:localhost:29000
m29000| Thu Jun 14 01:43:55 [conn3] build index config.databases { _id: 1 }
m29000| Thu Jun 14 01:43:55 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:60301 #2 (2 connections now open)
m30999| Thu Jun 14 01:43:55 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:48877 #2 (2 connections now open)
m30999| Thu Jun 14 01:43:55 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:60303 #3 (3 connections now open)
m30999| Thu Jun 14 01:43:55 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97a1b591abdbaaebc7690
m30999| Thu Jun 14 01:43:55 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97a1b591abdbaaebc7690
m30001| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:48879 #3 (3 connections now open)
m29000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:44426 #13 (13 connections now open)
m30999| Thu Jun 14 01:43:55 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd97a1b591abdbaaebc7690
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
{ "was" : 0, "ok" : 1 }
{ "was" : 0, "ok" : 1 }
{ "was" : 0, "ok" : 1 }
{ "was" : 0, "ok" : 1 }
{ "was" : 0, "ok" : 1 }
m30999| Thu Jun 14 01:43:55 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:43:55 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:43:55 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:60306 #4 (4 connections now open)
m30999| Thu Jun 14 01:43:55 [conn] connected connection!
m30000| Thu Jun 14 01:43:55 [conn4] runQuery called admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:43:55 [conn4] run command admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:43:55 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:10 reslen:1550 0ms
m30000| Thu Jun 14 01:43:55 [conn4] runQuery called admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:43:55 [conn4] run command admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:43:55 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:20 reslen:1550 0ms
m30999| Thu Jun 14 01:43:55 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:43:55 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:55 [conn] connected connection!
m30001| Thu Jun 14 01:43:55 [initandlisten] connection accepted from 127.0.0.1:48882 #4 (4 connections now open)
m30001| Thu Jun 14 01:43:55 [conn4] runQuery called admin.$cmd { serverStatus: 1 }
m30001| Thu Jun 14 01:43:55 [conn4] run command admin.$cmd { serverStatus: 1 }
m30001| Thu Jun 14 01:43:55 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:10 reslen:1550 0ms
m30999| Thu Jun 14 01:43:55 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:43:55 [conn] put [foo] on: shard0000:localhost:30000
m30999| Thu Jun 14 01:43:55 [conn] enabling sharding on: foo
{ "ok" : 1 }
m30999| Thu Jun 14 01:43:55 [conn] sharded index write for foo.system.indexes
m30000| Thu Jun 14 01:43:55 [conn3] opening db: foo
m30000| Thu Jun 14 01:43:55 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30000| Thu Jun 14 01:43:55 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Thu Jun 14 01:43:55 [FileAllocator] flushing directory /data/db/test0
m29000| Thu Jun 14 01:43:55 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.66 secs
m30000| Thu Jun 14 01:43:55 [FileAllocator] flushing directory /data/db/test0
m30000| Thu Jun 14 01:43:56 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.379 secs
m30000| Thu Jun 14 01:43:56 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:43:56 [FileAllocator] flushing directory /data/db/test0
m30000| Thu Jun 14 01:43:56 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.282 secs
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.system.indexes size 4608 0
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.system.indexes
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.system.namespaces size 2048 0
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.system.namespaces
m30000| Thu Jun 14 01:43:56 [conn3] create collection foo.bar {}
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.bar size 8192 0
m30000| Thu Jun 14 01:43:56 [conn3] adding _id index for collection foo.bar
m30000| Thu Jun 14 01:43:56 [conn3] build index foo.bar { _id: 1 }
m30000| mem info: before index start vsize: 143 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652636.0/
m30000| mem info: before final sort vsize: 143 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 143 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.bar.$_id_ size 36864 0
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.bar.$_id_
m30000| Thu Jun 14 01:43:56 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:56 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.bar
m30000| Thu Jun 14 01:43:56 [conn3] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:43:56 [conn3] build index foo.bar { a: 1.0 }
m30000| mem info: before index start vsize: 143 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652636.1/
m30000| mem info: before final sort vsize: 143 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 143 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.bar.$a_1 size 36864 0
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.bar.$a_1
m30000| Thu Jun 14 01:43:56 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:56 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] insert foo.system.indexes keyUpdates:0 locks(micros) W:67 w:847657 847ms
m30000| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:67 w:847657 reslen:67 0ms
m30000| Thu Jun 14 01:43:56 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:73 reslen:67 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called foo.system.indexes { ns: "foo.bar" }
m30000| Thu Jun 14 01:43:56 [conn4] query foo.system.indexes query: { ns: "foo.bar" } ntoreturn:0 keyUpdates:0 locks(micros) r:268 nreturned:2 reslen:145 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called foo.system.namespaces { name: "foo.bar" }
m30000| Thu Jun 14 01:43:56 [conn4] query foo.system.namespaces query: { name: "foo.bar" } ntoreturn:1 keyUpdates:0 locks(micros) r:324 nreturned:1 reslen:43 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { a: 1.0 } }
m30000| Thu Jun 14 01:43:56 [conn4] run command admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { a: 1.0 } }
m30000| Thu Jun 14 01:43:56 [conn4] command admin.$cmd command: { checkShardingIndex: "foo.bar", keyPattern: { a: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) r:376 reslen:37 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:56 [conn4] run command foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:56 [conn4] command foo.$cmd command: { count: "bar", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:411 reslen:48 0ms
m30999| Thu Jun 14 01:43:56 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { a: 1.0 } }
m30999| Thu Jun 14 01:43:56 [conn] enable sharding on: foo.bar with shard key: { a: 1.0 }
m30999| Thu Jun 14 01:43:56 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd97a1c591abdbaaebc7692
m30000| Thu Jun 14 01:43:56 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) r:411 w:43 0ms
m30999| Thu Jun 14 01:43:56 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7692
m30999| Thu Jun 14 01:43:56 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd97a1c591abdbaaebc7692 based on: (empty)
m30999| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 0 current: 2 version: 1|0||4fd97a1c591abdbaaebc7692 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30000| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:77 w:847657 reslen:171 0ms
m30000| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] trying to set shard version of 1|0||4fd97a1c591abdbaaebc7692 for 'foo.bar'
m30000| Thu Jun 14 01:43:56 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:43:56 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a1c591abdbaaebc7692 for 'foo.bar'
m30000| Thu Jun 14 01:43:56 [conn3] creating new connection to:localhost:29000
m30000| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 0 current: 2 version: 1|0||4fd97a1c591abdbaaebc7692 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m29000| Thu Jun 14 01:43:56 [conn3] build index config.collections { _id: 1 }
m29000| Thu Jun 14 01:43:56 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:44429 #14 (14 connections now open)
m30000| Thu Jun 14 01:43:56 [conn3] connected connection!
m30000| Thu Jun 14 01:43:56 [conn3] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7692
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30000| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:85 w:847657 reslen:86 1ms
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:43:56 [conn] resetting shard version of foo.bar on localhost:30001, version is zero
m30999| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30001 ns:foo.bar my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" } 0x8dedd50
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:86 reslen:86 0ms
m30000| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:60309 #5 (5 connections now open)
m30000| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30000| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30000| Thu Jun 14 01:43:56 [conn5] command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30000| Thu Jun 14 01:43:56 [conn5] entering shard mode for connection
m30000| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
m30000| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 reslen:86 0ms
m30001| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:48885 #5 (5 connections now open)
m30001| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30001| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30001| Thu Jun 14 01:43:56 [conn5] command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30001| Thu Jun 14 01:43:56 [conn5] entering shard mode for connection
m30001| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
m30001| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:4 reslen:86 0ms
m30000| Thu Jun 14 01:43:56 [conn5] runQuery called foo.bar {}
m30000| Thu Jun 14 01:43:56 [conn5] query foo.bar ntoreturn:1 keyUpdates:0 locks(micros) r:40 nreturned:0 reslen:20 0ms
m30001| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:48886 #6 (6 connections now open)
m30001| Thu Jun 14 01:43:56 [conn6] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30001| Thu Jun 14 01:43:56 [conn6] run command admin.$cmd { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30001| Thu Jun 14 01:43:56 [conn6] command: { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30000| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:60312 #6 (6 connections now open)
m30000| Thu Jun 14 01:43:56 [conn6] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30000| Thu Jun 14 01:43:56 [conn6] run command admin.$cmd { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30000| Thu Jun 14 01:43:56 [conn6] command: { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30000| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:60313 #7 (7 connections now open)
m30000| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30000| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30000| Thu Jun 14 01:43:56 [conn7] command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30000| Thu Jun 14 01:43:56 [conn7] entering shard mode for connection
m30000| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
m30000| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn7] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 reslen:86 0ms
m30001| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:48889 #7 (7 connections now open)
m30001| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30001| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30001| Thu Jun 14 01:43:56 [conn7] command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30001| Thu Jun 14 01:43:56 [conn7] entering shard mode for connection
m30001| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
m30001| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn7] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:3 reslen:86 0ms
m30000| Thu Jun 14 01:43:56 [conn7] runQuery called foo.bar {}
m30000| Thu Jun 14 01:43:56 [conn7] query foo.bar ntoreturn:1 keyUpdates:0 locks(micros) r:35 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:60315 #8 (8 connections now open)
m30000| Thu Jun 14 01:43:56 [conn8] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30000| Thu Jun 14 01:43:56 [conn8] run command admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30000| Thu Jun 14 01:43:56 [conn8] command: { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30999| Thu Jun 14 01:43:56 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:43:56 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:56-0", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652636505), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:56 [conn] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:43:56 [conn] inserting initial doc in config.locks for lock foo.bar
m30999| Thu Jun 14 01:43:56 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:43:56 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd97a1c591abdbaaebc7693" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0 }
m30001| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:48891 #8 (8 connections now open)
m30999| Thu Jun 14 01:43:56 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' acquired, ts : 4fd97a1c591abdbaaebc7693
m30999| Thu Jun 14 01:43:56 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:43:56 [conn] ChunkManager::drop : foo.bar all locked
m30999| Thu Jun 14 01:43:56 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:43:56 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8df13c0
m30999| Thu Jun 14 01:43:56 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:43:56 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:56-1", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652636508), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:56 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' unlocked.
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:56 [conn4] run command foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:56 [conn4] CMD: drop foo.bar
m30000| Thu Jun 14 01:43:56 [conn4] dropCollection: foo.bar
m30000| Thu Jun 14 01:43:56 [conn4] create collection foo.$freelist {}
m30000| Thu Jun 14 01:43:56 [conn4] allocExtent foo.$freelist size 8192 0
m30000| Thu Jun 14 01:43:56 [conn4] dropIndexes done
m30000| Thu Jun 14 01:43:56 [conn4] command foo.$cmd command: { drop: "bar" } ntoreturn:1 keyUpdates:0 locks(micros) r:411 w:799 reslen:114 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn4] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn4] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn4] entering shard mode for connection
m30000| Thu Jun 14 01:43:56 [conn4] wiping data for: foo.bar
m30000| Thu Jun 14 01:43:56 [conn4] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:411 w:799 reslen:135 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:56 [conn4] run command admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:56 [conn4] command: { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:56 [conn4] command admin.$cmd command: { unsetSharding: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:411 w:799 reslen:37 0ms
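
At this point the log has recorded the routers and the shard dropping foo.bar (the DROP / dropCollection lines above), and in the lines that follow the collection is rebuilt with an index on { b: 1.0 }. A sketch of that step as a test would typically write it, under the same assumed variable names as the earlier sketches:

    // Drop the collection and rebuild it with an index on the next key the
    // log shows being built ("build index foo.bar { b: 1.0 }" below).
    coll.drop();                 // produces the DROP / dropCollection lines above
    coll.ensureIndex({ b: 1 });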
m30999| Thu Jun 14 01:43:56 [conn] sharded index write for foo.system.indexes
m30000| Thu Jun 14 01:43:56 [conn3] create collection foo.bar {}
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.bar size 8192 1
m30000| Thu Jun 14 01:43:56 [conn3] adding _id index for collection foo.bar
m30000| Thu Jun 14 01:43:56 [conn3] build index foo.bar { _id: 1 }
m30001| Thu Jun 14 01:43:56 [conn8] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30001| Thu Jun 14 01:43:56 [conn8] run command admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30001| Thu Jun 14 01:43:56 [conn8] command: { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30000| mem info: before index start vsize: 157 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652636.2/
m30997| Thu Jun 14 01:43:56 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30997| Thu Jun 14 01:43:56 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7692
m30997| Thu Jun 14 01:43:56 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd97a1c591abdbaaebc7692 based on: (empty)
m30997| Thu Jun 14 01:43:56 [conn] found 0 dropped collections and 1 sharded collections for database foo
m30997| Thu Jun 14 01:43:56 [conn] [pcursor] creating pcursor over QSpec { ns: "foo.bar", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30997| Thu Jun 14 01:43:56 [conn] [pcursor] initializing over 1 shards required by [foo.bar @ 1|0||4fd97a1c591abdbaaebc7692]
m30997| Thu Jun 14 01:43:56 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30997| Thu Jun 14 01:43:56 [conn] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:43:56 [conn] connected connection!
m30997| Thu Jun 14 01:43:56 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97a1b3fa2ba75ec315063
m30997| Thu Jun 14 01:43:56 [conn] initializing shard connection to localhost:30000
m30997| Thu Jun 14 01:43:56 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30997| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 0 current: 2 version: 1|0||4fd97a1c591abdbaaebc7692 manager: 0xa2b1e40
m30997| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } 0xa2b3050
m30997| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:43:56 [conn] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:43:56 BackgroundJob starting: WriteBackListener-localhost:30000
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:43:56 [conn] connected connection!
m30997| Thu Jun 14 01:43:56 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97a1b3fa2ba75ec315063
m30997| Thu Jun 14 01:43:56 [conn] initializing shard connection to localhost:30001
m30997| Thu Jun 14 01:43:56 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true }
m30997| Thu Jun 14 01:43:56 [conn] resetting shard version of foo.bar on localhost:30001, version is zero
m30997| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30001 ns:foo.bar my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0xa2b1e40
m30997| Thu Jun 14 01:43:56 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0001", shardHost: "localhost:30001" } 0xa2b35f0
m30997| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30997| Thu Jun 14 01:43:56 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1c591abdbaaebc7692", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30997| Thu Jun 14 01:43:56 [conn] [pcursor] finishing over 1 shards
m30997| Thu Jun 14 01:43:56 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1c591abdbaaebc7692", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30997| Thu Jun 14 01:43:56 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "foo.bar @ 1|0||4fd97a1c591abdbaaebc7692", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30997| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] connected connection!
m30997| Thu Jun 14 01:43:56 BackgroundJob starting: WriteBackListener-localhost:30001
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30997| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30001] connected connection!
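[editor's note] The [pcursor] lines above show a single-batch query (QSpec with n2return: -1 and empty query/fields) being routed to shard0000 under chunk-manager version 1|0||4fd97a1c591abdbaaebc7692, which is consistent with the shell's findOne() helper. A minimal, hypothetical sketch of issuing the same query through the router on 30997 (the variable name is an assumption):

    // single-batch query over foo.bar through the mongos on 30997;
    // ntoreturn -1 with an empty query and empty fields is what findOne() sends
    var fooAt30997 = new Mongo("localhost:30997").getDB("foo");
    printjson(fooAt30997.bar.findOne());   // routed to shard0000 per the pcursor lines above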
m30000| mem info: before final sort vsize: 157 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 157 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.bar.$_id_ size 36864 1
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.bar.$_id_
m30000| Thu Jun 14 01:43:56 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:56 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.bar
m30000| Thu Jun 14 01:43:56 [conn3] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:43:56 [conn3] build index foo.bar { b: 1.0 }
m30000| mem info: before index start vsize: 157 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652636.3/
m30998| Thu Jun 14 01:43:56 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:43:56 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7692
m30998| Thu Jun 14 01:43:56 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd97a1c591abdbaaebc7692 based on: (empty)
m30998| Thu Jun 14 01:43:56 [conn] found 0 dropped collections and 1 sharded collections for database foo
m30998| Thu Jun 14 01:43:56 [conn] [pcursor] creating pcursor over QSpec { ns: "foo.bar", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30998| Thu Jun 14 01:43:56 [conn] [pcursor] initializing over 1 shards required by [foo.bar @ 1|0||4fd97a1c591abdbaaebc7692]
m30998| Thu Jun 14 01:43:56 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30998| Thu Jun 14 01:43:56 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:43:56 [conn] connected connection!
m30998| Thu Jun 14 01:43:56 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97a1b632292824afda1ed
m30998| Thu Jun 14 01:43:56 [conn] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:43:56 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30998| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 0 current: 2 version: 1|0||4fd97a1c591abdbaaebc7692 manager: 0x9b19c00
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30998| Thu Jun 14 01:43:56 BackgroundJob starting: WriteBackListener-localhost:30000
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:43:56 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:43:56 [conn] connected connection!
m30998| Thu Jun 14 01:43:56 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97a1b632292824afda1ed
m30998| Thu Jun 14 01:43:56 [conn] initializing shard connection to localhost:30001
m30998| Thu Jun 14 01:43:56 [conn] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30998| Thu Jun 14 01:43:56 [conn] resetting shard version of foo.bar on localhost:30001, version is zero
m30998| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30001 ns:foo.bar my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x9b19c00
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" } 0xb2d00570
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:43:56 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1c591abdbaaebc7692", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:43:56 [conn] [pcursor] finishing over 1 shards
m30998| Thu Jun 14 01:43:56 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1c591abdbaaebc7692", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:43:56 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "foo.bar @ 1|0||4fd97a1c591abdbaaebc7692", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30998| Thu Jun 14 01:43:56 BackgroundJob starting: WriteBackListener-localhost:30001
m30998| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30001] connected connection!
m30998| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] connected connection!
m30000| mem info: before final sort vsize: 157 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 157 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:56 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] allocExtent foo.bar.$b_1 size 36864 1
m30000| Thu Jun 14 01:43:56 [conn3] New namespace: foo.bar.$b_1
m30000| Thu Jun 14 01:43:56 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:56 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:56 [conn3] insert foo.system.indexes keyUpdates:0 locks(micros) W:85 w:849413 1ms
m30000| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:85 w:849413 reslen:67 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called foo.system.indexes { ns: "foo.bar" }
m30000| Thu Jun 14 01:43:56 [conn4] query foo.system.indexes query: { ns: "foo.bar" } ntoreturn:0 keyUpdates:0 locks(micros) W:20 r:501 w:799 nreturned:2 reslen:145 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called foo.system.namespaces { name: "foo.bar" }
m30000| Thu Jun 14 01:43:56 [conn4] query foo.system.namespaces query: { name: "foo.bar" } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:549 w:799 nreturned:1 reslen:43 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { b: 1.0 } }
m30000| Thu Jun 14 01:43:56 [conn4] run command admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { b: 1.0 } }
m30000| Thu Jun 14 01:43:56 [conn4] command admin.$cmd command: { checkShardingIndex: "foo.bar", keyPattern: { b: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:575 w:799 reslen:37 0ms
m30000| Thu Jun 14 01:43:56 [conn4] runQuery called foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:56 [conn4] run command foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:56 [conn4] command foo.$cmd command: { count: "bar", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:592 w:799 reslen:48 0ms
m30000| Thu Jun 14 01:43:56 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) W:20 r:592 w:816 0ms
m30000| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:93 w:849413 reslen:171 0ms
m30000| Thu Jun 14 01:43:56 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn3] trying to set shard version of 1|0||4fd97a1c591abdbaaebc7694 for 'foo.bar'
m30000| Thu Jun 14 01:43:56 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:43:56 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a1c591abdbaaebc7694 for 'foo.bar'
m30999| Thu Jun 14 01:43:56 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { b: 1.0 } }
m30999| Thu Jun 14 01:43:56 [conn] enable sharding on: foo.bar with shard key: { b: 1.0 }
m30999| Thu Jun 14 01:43:56 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd97a1c591abdbaaebc7694
m30999| Thu Jun 14 01:43:56 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7694
m30999| Thu Jun 14 01:43:56 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|0||4fd97a1c591abdbaaebc7694 based on: (empty)
m30999| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 3 version: 1|0||4fd97a1c591abdbaaebc7694 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 3 version: 1|0||4fd97a1c591abdbaaebc7694 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn3] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7694
m30000| Thu Jun 14 01:43:56 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:99 w:849413 reslen:86 0ms
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30998| Thu Jun 14 01:43:56 [conn] warning: shard key mismatch for insert { _id: ObjectId('4fd97a1c06b725597f257fd9'), b: "b" }, expected values for { a: 1.0 }, reloading config data to ensure not stale
m30998| Thu Jun 14 01:43:56 [conn] loading chunk manager for collection foo.bar using old chunk manager w/ version 1|0||4fd97a1c591abdbaaebc7692 and 1 chunks
m30998| Thu Jun 14 01:43:56 [conn] warning: got invalid chunk version 1|0||4fd97a1c591abdbaaebc7694 in document { _id: "foo.bar-b_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), ns: "foo.bar", min: { b: MinKey }, max: { b: MaxKey }, shard: "shard0000" } when trying to load differing chunks at version 1|0||4fd97a1c591abdbaaebc7692
m30998| Thu Jun 14 01:43:56 [conn] warning: major change in chunk information found when reloading foo.bar, previous version was 1|0||4fd97a1c591abdbaaebc7692
m30998| Thu Jun 14 01:43:56 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 0|0||000000000000000000000000 based on: 1|0||4fd97a1c591abdbaaebc7692
m30998| Thu Jun 14 01:43:56 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:43:56 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7694
m30998| Thu Jun 14 01:43:56 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 1|0||4fd97a1c591abdbaaebc7694 based on: (empty)
m30998| Thu Jun 14 01:43:56 [conn] found 0 dropped collections and 1 sharded collections for database foo
m30998| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 4 version: 1|0||4fd97a1c591abdbaaebc7694 manager: 0xb2d018c8
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30000| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:40 reslen:163 0ms
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion failed!
m30998| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30998| Thu Jun 14 01:43:56 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 4 version: 1|0||4fd97a1c591abdbaaebc7694 manager: 0xb2d018c8
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30000| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:40 reslen:86 0ms
m30998| Thu Jun 14 01:43:56 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), ok: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn5] insert foo.bar keyUpdates:0 locks(micros) r:40 w:102 0ms
m30000| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:102 reslen:67 0ms
m30001| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:4 reslen:67 0ms
m30998| Thu Jun 14 01:43:56 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { b: MinKey } max: { b: MaxKey } dataWritten: 7396464 splitThreshold: 921
m30998| Thu Jun 14 01:43:56 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:60317 #9 (9 connections now open)
m30998| Thu Jun 14 01:43:56 [conn] connected connection!
m30998| Thu Jun 14 01:43:56 [conn] chunk not full enough to trigger auto-split no split entry
m30000| Thu Jun 14 01:43:56 [conn9] runQuery called admin.$cmd { splitVector: "foo.bar", keyPattern: { b: 1.0 }, min: { b: MinKey }, max: { b: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:56 [conn9] run command admin.$cmd { splitVector: "foo.bar", keyPattern: { b: 1.0 }, min: { b: MinKey }, max: { b: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:56 [conn9] command admin.$cmd command: { splitVector: "foo.bar", keyPattern: { b: 1.0 }, min: { b: MinKey }, max: { b: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 locks(micros) r:27 reslen:53 0ms
m30000| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:102 reslen:67 0ms
m30001| Thu Jun 14 01:43:56 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn5] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:4 reslen:67 0ms
m30000| Thu Jun 14 01:43:56 [conn7] connection sharding metadata does not match for collection foo.bar, will retry (wanted : 1|0||4fd97a1c591abdbaaebc7694, received : 1|0||4fd97a1c591abdbaaebc7692) (queuing writeback)
m30000| Thu Jun 14 01:43:56 [conn7] writeback queued for op: insert len: 59 ns: foo.bar{ _id: ObjectId('4fd97a1c06b725597f257fda'), a: "a" }
m30000| Thu Jun 14 01:43:56 [conn7] writing back msg with len: 59 op: 2002
m30000| Thu Jun 14 01:43:56 [conn7] insert foo.bar keyUpdates:0 locks(micros) r:35 w:87 0ms
m30000| Thu Jun 14 01:43:56 [conn8] WriteBackCommand got : { writeBack: true, ns: "foo.bar", id: ObjectId('4fd97a1c0000000000000000'), connectionId: 7, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), msg: BinData }
m30000| Thu Jun 14 01:43:56 [conn8] command admin.$cmd command: { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') } ntoreturn:1 keyUpdates:0 reslen:325 17ms
m30000| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:35 w:87 reslen:138 0ms
m30001| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:3 reslen:67 0ms
m30997| Thu Jun 14 01:43:56 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } dataWritten: 3971208 splitThreshold: 921
m30997| Thu Jun 14 01:43:56 [conn] creating new connection to:localhost:30000
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd97a1c0000000000000000'), connectionId: 7, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:7 writebackId: 4fd97a1c0000000000000000 needVersion : 1|0||4fd97a1c591abdbaaebc7694 mine : 1|0||4fd97a1c591abdbaaebc7692
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] op: insert len: 59 ns: foo.bar{ _id: ObjectId('4fd97a1c06b725597f257fda'), a: "a" }
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] warning: reloading config data for foo, wanted version 1|0||4fd97a1c591abdbaaebc7694 but currently have version 1|0||4fd97a1c591abdbaaebc7692
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1c591abdbaaebc7694
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|0||4fd97a1c591abdbaaebc7694 based on: (empty)
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] found 0 dropped collections and 1 sharded collections for database foo
m30997| Thu Jun 14 01:43:56 [WriteBackListener-localhost:30000] warning: shard key mismatch for insert { _id: ObjectId('4fd97a1c06b725597f257fda'), a: "a" }, expected values for { b: 1.0 }, reloading config data to ensure not stale
m30997| Thu Jun 14 01:43:56 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:43:56 [initandlisten] connection accepted from 127.0.0.1:60318 #10 (10 connections now open)
m30997| Thu Jun 14 01:43:56 [conn] connected connection!
m30997| Thu Jun 14 01:43:56 [conn] User Assertion: 13345:splitVector command failed: { errmsg: "couldn't find index over splitting key", ok: 0.0 }
m30997| Thu Jun 14 01:43:56 [conn] warning: could not autosplit collection foo.bar :: caused by :: 13345 splitVector command failed: { errmsg: "couldn't find index over splitting key", ok: 0.0 }
m30000| Thu Jun 14 01:43:56 [conn10] runQuery called admin.$cmd { splitVector: "foo.bar", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:56 [conn10] run command admin.$cmd { splitVector: "foo.bar", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:56 [conn10] command admin.$cmd command: { splitVector: "foo.bar", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 locks(micros) r:10 reslen:88 0ms
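[editor's note] The 13345 assertion above is the router on 30997 attempting an autosplit with its stale { a: 1 } key pattern: the splitVector it sends to shard0000 (visible on conn10) names an index that no longer exists after the re-shard on { b: 1 }. A rough, hand-run reproduction of that command against the shard, with the parameters copied from this log:

    // send the same splitVector the stale router issued; expect the
    // "couldn't find index over splitting key" error seen above
    var shard0000 = new Mongo("localhost:30000").getDB("admin");
    printjson(shard0000.runCommand({
        splitVector: "foo.bar",
        keyPattern: { a: 1 },
        min: { a: MinKey }, max: { a: MaxKey },
        maxChunkSizeBytes: 1024,
        maxSplitPoints: 2,
        maxChunkObjects: 250000
    }));
    // => { errmsg: "couldn't find index over splitting key", ok: 0 }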
m30000| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) r:35 w:87 reslen:138 0ms
m30001| Thu Jun 14 01:43:56 [conn7] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn7] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:56 [conn7] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:3 reslen:67 0ms
m30000| Thu Jun 14 01:43:56 [FileAllocator] flushing directory /data/db/test0
m30000| Thu Jun 14 01:43:57 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.653 secs
m30997| Thu Jun 14 01:43:57 [WriteBackListener-localhost:30000] tried to insert object with no valid shard key for { b: 1.0 } : { _id: ObjectId('4fd97a1c06b725597f257fda'), a: "a" }
m30997| Thu Jun 14 01:43:57 [WriteBackListener-localhost:30000] User Assertion: 8011:tried to insert object with no valid shard key for { b: 1.0 } : { _id: ObjectId('4fd97a1c06b725597f257fda'), a: "a" }
m30997| Thu Jun 14 01:43:57 [WriteBackListener-localhost:30000] ERROR: error processing writeback: 8011 tried to insert object with no valid shard key for { b: 1.0 } : { _id: ObjectId('4fd97a1c06b725597f257fda'), a: "a" }
m30000| Thu Jun 14 01:43:57 [conn10] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30000| Thu Jun 14 01:43:57 [conn10] run command admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30000| Thu Jun 14 01:43:57 [conn10] command: { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30999| Thu Jun 14 01:43:57 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:43:57 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:57-2", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652637541), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:57 [conn] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:43:57 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:43:57 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd97a1d591abdbaaebc7695" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a1c591abdbaaebc7693" } }
m30999| Thu Jun 14 01:43:57 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' acquired, ts : 4fd97a1d591abdbaaebc7695
m30999| Thu Jun 14 01:43:57 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:43:57 [conn] ChunkManager::drop : foo.bar all locked
m30999| Thu Jun 14 01:43:57 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:43:57 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:43:57 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8df13c0
m30999| Thu Jun 14 01:43:57 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:43:57 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:57-3", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652637543), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:57 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' unlocked.
m30999| Thu Jun 14 01:43:57 [conn] sharded index write for foo.system.indexes
m30000| Thu Jun 14 01:43:57 [conn4] runQuery called foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:57 [conn4] run command foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:57 [conn4] CMD: drop foo.bar
m30000| Thu Jun 14 01:43:57 [conn4] dropCollection: foo.bar
m30000| Thu Jun 14 01:43:57 [conn4] dropIndexes done
m30000| Thu Jun 14 01:43:57 [conn4] command foo.$cmd command: { drop: "bar" } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:592 w:1333 reslen:114 0ms
m30000| Thu Jun 14 01:43:57 [conn4] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn4] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn4] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn4] entering shard mode for connection
m30000| Thu Jun 14 01:43:57 [conn4] wiping data for: foo.bar
m30000| Thu Jun 14 01:43:57 [conn4] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:592 w:1333 reslen:135 0ms
m30000| Thu Jun 14 01:43:57 [conn4] runQuery called admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:57 [conn4] run command admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:57 [conn4] command: { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:57 [conn4] command admin.$cmd command: { unsetSharding: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:592 w:1333 reslen:37 0ms
m30000| Thu Jun 14 01:43:57 [conn3] create collection foo.bar {}
m30000| Thu Jun 14 01:43:57 [conn3] allocExtent foo.bar size 8192 1
m30000| Thu Jun 14 01:43:57 [conn3] adding _id index for collection foo.bar
m30000| Thu Jun 14 01:43:57 [conn3] build index foo.bar { _id: 1 }
m30000| mem info: before index start vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:57 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652637.4/
m30000| mem info: before final sort vsize: 159 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:57 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:57 [conn3] allocExtent foo.bar.$_id_ size 36864 1
m30000| Thu Jun 14 01:43:57 [conn3] New namespace: foo.bar.$_id_
m30000| Thu Jun 14 01:43:57 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:57 [conn3] New namespace: foo.bar
m30000| Thu Jun 14 01:43:57 [conn3] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:43:57 [conn3] build index foo.bar { c: 1.0 }
m30000| mem info: before index start vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:57 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652637.5/
m30000| mem info: before final sort vsize: 159 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:57 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:57 [conn3] allocExtent foo.bar.$c_1 size 36864 1
m30000| Thu Jun 14 01:43:57 [conn3] New namespace: foo.bar.$c_1
m30000| Thu Jun 14 01:43:57 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:57 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:57 [conn3] insert foo.system.indexes keyUpdates:0 locks(micros) W:99 w:851217 1ms
m30000| Thu Jun 14 01:43:57 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:57 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:57 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:99 w:851217 reslen:67 0ms
m30000| Thu Jun 14 01:43:57 [conn4] runQuery called foo.system.indexes { ns: "foo.bar" }
m30000| Thu Jun 14 01:43:57 [conn4] query foo.system.indexes query: { ns: "foo.bar" } ntoreturn:0 keyUpdates:0 locks(micros) W:40 r:674 w:1333 nreturned:2 reslen:145 0ms
m30000| Thu Jun 14 01:43:57 [conn4] runQuery called foo.system.namespaces { name: "foo.bar" }
m30000| Thu Jun 14 01:43:57 [conn4] query foo.system.namespaces query: { name: "foo.bar" } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:722 w:1333 nreturned:1 reslen:43 0ms
m30000| Thu Jun 14 01:43:57 [conn4] runQuery called admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { c: 1.0 } }
m30000| Thu Jun 14 01:43:57 [conn4] run command admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { c: 1.0 } }
m30000| Thu Jun 14 01:43:57 [conn4] command admin.$cmd command: { checkShardingIndex: "foo.bar", keyPattern: { c: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:747 w:1333 reslen:37 0ms
m30000| Thu Jun 14 01:43:57 [conn4] runQuery called foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:57 [conn4] run command foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:57 [conn4] command foo.$cmd command: { count: "bar", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:765 w:1333 reslen:48 0ms
m30000| Thu Jun 14 01:43:57 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) W:40 r:765 w:1350 0ms
m30000| Thu Jun 14 01:43:57 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:113 w:851217 reslen:171 0ms
m30999| Thu Jun 14 01:43:57 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { c: 1.0 } }
m30999| Thu Jun 14 01:43:57 [conn] enable sharding on: foo.bar with shard key: { c: 1.0 }
m30999| Thu Jun 14 01:43:57 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd97a1d591abdbaaebc7696
m30999| Thu Jun 14 01:43:57 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1d591abdbaaebc7696
m30999| Thu Jun 14 01:43:57 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 1|0||4fd97a1d591abdbaaebc7696 based on: (empty)
m30999| Thu Jun 14 01:43:57 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 3 current: 4 version: 1|0||4fd97a1d591abdbaaebc7696 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:57 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:57 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:43:57 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 3 current: 4 version: 1|0||4fd97a1d591abdbaaebc7696 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:57 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:57 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:43:57 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn3] trying to set shard version of 1|0||4fd97a1d591abdbaaebc7696 for 'foo.bar'
m30000| Thu Jun 14 01:43:57 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:43:57 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a1d591abdbaaebc7696 for 'foo.bar'
m30000| Thu Jun 14 01:43:57 [conn3] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1d591abdbaaebc7696
m30000| Thu Jun 14 01:43:57 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:119 w:851217 reslen:86 0ms
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30998| Thu Jun 14 01:43:57 [conn] loading chunk manager for collection foo.bar using old chunk manager w/ version 1|0||4fd97a1c591abdbaaebc7694 and 1 chunks
m30998| Thu Jun 14 01:43:57 [conn] warning: got invalid chunk version 1|0||4fd97a1d591abdbaaebc7696 in document { _id: "foo.bar-c_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), ns: "foo.bar", min: { c: MinKey }, max: { c: MaxKey }, shard: "shard0000" } when trying to load differing chunks at version 1|0||4fd97a1c591abdbaaebc7694
m30998| Thu Jun 14 01:43:57 [conn] warning: major change in chunk information found when reloading foo.bar, previous version was 1|0||4fd97a1c591abdbaaebc7694
m30998| Thu Jun 14 01:43:57 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 0|0||000000000000000000000000 based on: 1|0||4fd97a1c591abdbaaebc7694
m30998| Thu Jun 14 01:43:57 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:43:57 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1d591abdbaaebc7696
m30998| Thu Jun 14 01:43:57 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 6 version: 1|0||4fd97a1d591abdbaaebc7696 based on: (empty)
m30998| Thu Jun 14 01:43:57 [conn] found 0 dropped collections and 1 sharded collections for database foo
m30998| Thu Jun 14 01:43:57 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 4 current: 6 version: 1|0||4fd97a1d591abdbaaebc7696 manager: 0xb2d01740
m30998| Thu Jun 14 01:43:57 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30000| Thu Jun 14 01:43:57 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:102 reslen:163 0ms
m30998| Thu Jun 14 01:43:57 [conn] setShardVersion failed!
m30998| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30998| Thu Jun 14 01:43:57 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 4 current: 6 version: 1|0||4fd97a1d591abdbaaebc7696 manager: 0xb2d01740
m30998| Thu Jun 14 01:43:57 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30000| Thu Jun 14 01:43:57 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:102 reslen:86 0ms
m30998| Thu Jun 14 01:43:57 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), ok: 1.0 }
m30000| Thu Jun 14 01:43:57 [conn5] update foo.bar query: { c: "c" } update: { c: "c" } nscanned:0 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:40 w:267 0ms
m30000| Thu Jun 14 01:43:57 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:57 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:57 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:267 reslen:107 0ms
m30000| Thu Jun 14 01:43:57 [conn9] runQuery called admin.$cmd { splitVector: "foo.bar", keyPattern: { c: 1.0 }, min: { c: MinKey }, max: { c: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:57 [conn9] run command admin.$cmd { splitVector: "foo.bar", keyPattern: { c: 1.0 }, min: { c: MinKey }, max: { c: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:57 [conn9] command admin.$cmd command: { splitVector: "foo.bar", keyPattern: { c: 1.0 }, min: { c: MinKey }, max: { c: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 locks(micros) r:51 reslen:53 0ms
m30000| Thu Jun 14 01:43:57 [conn5] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:57 [conn5] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:57 [conn5] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:267 reslen:107 0ms
m30001| Thu Jun 14 01:43:57 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:57 [conn5] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:57 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:4 reslen:67 0ms
m30998| Thu Jun 14 01:43:57 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { c: MinKey } max: { c: MaxKey } dataWritten: 6853750 splitThreshold: 921
m30998| Thu Jun 14 01:43:57 [conn] chunk not full enough to trigger auto-split no split entry
m30997| Thu Jun 14 01:43:57 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 3 version: 1|0||4fd97a1c591abdbaaebc7694 manager: 0xa2b4ce0
m30997| Thu Jun 14 01:43:57 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } 0xa2b3050
m30000| Thu Jun 14 01:43:57 [conn7] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn7] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn7] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:57 [conn7] trying to set shard version of 1|0||4fd97a1c591abdbaaebc7694 for 'foo.bar'
m30000| Thu Jun 14 01:43:57 [conn7] verifying cached version 1|0||4fd97a1d591abdbaaebc7696 and new version 1|0||4fd97a1c591abdbaaebc7694 for 'foo.bar'
m30000| Thu Jun 14 01:43:57 [conn7] warning: detected incompatible version epoch in new version 1|0||4fd97a1c591abdbaaebc7694, old version was 1|0||4fd97a1d591abdbaaebc7696
m30000| Thu Jun 14 01:43:57 [conn7] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1d591abdbaaebc7696
m30000| Thu Jun 14 01:43:57 [conn7] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:10 r:35 w:87 reslen:274 0ms
m30997| Thu Jun 14 01:43:57 [conn] setShardVersion failed!
m30997| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), ns: "foo.bar", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1c591abdbaaebc7694'), globalVersion: Timestamp 1000|0, globalVersionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), errmsg: "client version differs from config's for collection 'foo.bar'", ok: 0.0 }
m30997| Thu Jun 14 01:43:57 [conn] loading chunk manager for collection foo.bar using old chunk manager w/ version 1|0||4fd97a1c591abdbaaebc7694 and 1 chunks
m30997| Thu Jun 14 01:43:57 [conn] warning: got invalid chunk version 1|0||4fd97a1d591abdbaaebc7696 in document { _id: "foo.bar-c_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), ns: "foo.bar", min: { c: MinKey }, max: { c: MaxKey }, shard: "shard0000" } when trying to load differing chunks at version 1|0||4fd97a1c591abdbaaebc7694
m30997| Thu Jun 14 01:43:57 [conn] warning: major change in chunk information found when reloading foo.bar, previous version was 1|0||4fd97a1c591abdbaaebc7694
m30997| Thu Jun 14 01:43:57 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 0|0||000000000000000000000000 based on: 1|0||4fd97a1c591abdbaaebc7694
m30997| Thu Jun 14 01:43:57 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30997| Thu Jun 14 01:43:57 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1d591abdbaaebc7696
m30997| Thu Jun 14 01:43:57 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 1|0||4fd97a1d591abdbaaebc7696 based on: (empty)
m30997| Thu Jun 14 01:43:57 [conn] found 0 dropped collections and 1 sharded collections for database foo
m30997| Thu Jun 14 01:43:57 [conn] update will be retried b/c sharding config info is stale, retries: 0 ns: foo.bar data: { b: "b" }
m30997| Thu Jun 14 01:43:58 [conn] User Assertion: 12376:full shard key must be in update object for collection: foo.bar
m30997| Thu Jun 14 01:43:58 [conn] AssertionException while processing op type : 2001 to : foo.bar :: caused by :: 12376 full shard key must be in update object for collection: foo.bar
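The 12376 assertion above is mongos rejecting a replacement-style update whose new document omits the shard key: at this point foo.bar is sharded on { c: 1 } (see the chunk document with min { c: MinKey } above), and the retried update carried only { b: "b" }. A minimal pymongo sketch of that constraint, assuming a mongos on localhost:30997 and the same foo.bar namespace; ports and field values are taken from the log, and newer server versions relax or reword this check, so this only illustrates the 2.1-era rule seen here:

    # Sketch only: the "full shard key must be in update object" rule hit above.
    # Assumes foo.bar is sharded on { c: 1 } and a mongos listens on localhost:30997.
    from pymongo import MongoClient
    from pymongo.errors import OperationFailure

    mongos = MongoClient("localhost", 30997)
    bar = mongos["foo"]["bar"]

    try:
        # Replacement document has no "c" field, so mongos cannot keep the document
        # inside the correct chunk range and refuses the update (error 12376 here).
        bar.replace_one({"c": "c"}, {"b": "b"})
    except OperationFailure as exc:
        print("rejected:", exc)

    # Carrying the shard key in the replacement document keeps the update routable.
    bar.replace_one({"c": "c"}, {"c": "c", "b": "b"})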
m30999| Thu Jun 14 01:43:58 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:43:58 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:58-4", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652638557), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:58 [conn] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:43:58 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:43:58 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd97a1e591abdbaaebc7697" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a1d591abdbaaebc7695" } }
m30999| Thu Jun 14 01:43:58 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' acquired, ts : 4fd97a1e591abdbaaebc7697
m30999| Thu Jun 14 01:43:58 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:43:58 [conn] ChunkManager::drop : foo.bar all locked
m30999| Thu Jun 14 01:43:58 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:43:58 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:43:58 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8df13c0
m30999| Thu Jun 14 01:43:58 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:43:58 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:58-5", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652638559), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:58 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' unlocked.
m30999| Thu Jun 14 01:43:58 [conn] sharded index write for foo.system.indexes
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:58 [conn4] run command foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:58 [conn4] CMD: drop foo.bar
m30000| Thu Jun 14 01:43:58 [conn4] dropCollection: foo.bar
m30000| Thu Jun 14 01:43:58 [conn4] dropIndexes done
m30000| Thu Jun 14 01:43:58 [conn4] command foo.$cmd command: { drop: "bar" } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:765 w:1857 reslen:114 0ms
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn4] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn4] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn4] entering shard mode for connection
m30000| Thu Jun 14 01:43:58 [conn4] wiping data for: foo.bar
m30000| Thu Jun 14 01:43:58 [conn4] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:58 r:765 w:1857 reslen:135 0ms
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:58 [conn4] run command admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:58 [conn4] command: { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:58 [conn4] command admin.$cmd command: { unsetSharding: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:58 r:765 w:1857 reslen:37 0ms
m30000| Thu Jun 14 01:43:58 [conn3] create collection foo.bar {}
m30000| Thu Jun 14 01:43:58 [conn3] allocExtent foo.bar size 8192 1
m30000| Thu Jun 14 01:43:58 [conn3] adding _id index for collection foo.bar
m30000| Thu Jun 14 01:43:58 [conn3] build index foo.bar { _id: 1 }
m30000| mem info: before index start vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:58 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652638.6/
m30000| mem info: before final sort vsize: 159 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:58 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:58 [conn3] allocExtent foo.bar.$_id_ size 36864 1
m30000| Thu Jun 14 01:43:58 [conn3] New namespace: foo.bar.$_id_
m30000| Thu Jun 14 01:43:58 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:58 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:58 [conn3] New namespace: foo.bar
m30000| Thu Jun 14 01:43:58 [conn3] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:43:58 [conn3] build index foo.bar { d: 1.0 }
m30000| mem info: before index start vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:58 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652638.7/
m30000| mem info: before final sort vsize: 159 resident: 32 mapped: 32
m30000| mem info: after final sort vsize: 159 resident: 32 mapped: 32
m30000| Thu Jun 14 01:43:58 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:58 [conn3] allocExtent foo.bar.$d_1 size 36864 1
m30000| Thu Jun 14 01:43:58 [conn3] New namespace: foo.bar.$d_1
m30000| Thu Jun 14 01:43:58 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:58 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:58 [conn3] insert foo.system.indexes keyUpdates:0 locks(micros) W:119 w:852653 1ms
m30000| Thu Jun 14 01:43:58 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:58 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:58 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:119 w:852653 reslen:67 0ms
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called foo.system.indexes { ns: "foo.bar" }
m30000| Thu Jun 14 01:43:58 [conn4] query foo.system.indexes query: { ns: "foo.bar" } ntoreturn:0 keyUpdates:0 locks(micros) W:58 r:826 w:1857 nreturned:2 reslen:145 0ms
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called foo.system.namespaces { name: "foo.bar" }
m30000| Thu Jun 14 01:43:58 [conn4] query foo.system.namespaces query: { name: "foo.bar" } ntoreturn:1 keyUpdates:0 locks(micros) W:58 r:873 w:1857 nreturned:1 reslen:43 0ms
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { d: 1.0 } }
m30000| Thu Jun 14 01:43:58 [conn4] run command admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { d: 1.0 } }
m30000| Thu Jun 14 01:43:58 [conn4] command admin.$cmd command: { checkShardingIndex: "foo.bar", keyPattern: { d: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:58 r:896 w:1857 reslen:37 0ms
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:58 [conn4] run command foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:58 [conn4] command foo.$cmd command: { count: "bar", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:58 r:915 w:1857 reslen:48 0ms
m30000| Thu Jun 14 01:43:58 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) W:58 r:915 w:1874 0ms
m30999| Thu Jun 14 01:43:58 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { d: 1.0 } }
m30999| Thu Jun 14 01:43:58 [conn] enable sharding on: foo.bar with shard key: { d: 1.0 }
m30999| Thu Jun 14 01:43:58 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd97a1e591abdbaaebc7698
m30999| Thu Jun 14 01:43:58 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1e591abdbaaebc7698
m30999| Thu Jun 14 01:43:58 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 1|0||4fd97a1e591abdbaaebc7698 based on: (empty)
m30999| Thu Jun 14 01:43:58 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 4 current: 5 version: 1|0||4fd97a1e591abdbaaebc7698 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:58 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:58 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30000| Thu Jun 14 01:43:58 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:126 w:852653 reslen:171 0ms
m30000| Thu Jun 14 01:43:58 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30000| Thu Jun 14 01:43:58 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn3] trying to set shard version of 1|0||4fd97a1e591abdbaaebc7698 for 'foo.bar'
m30000| Thu Jun 14 01:43:58 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:43:58 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a1e591abdbaaebc7698 for 'foo.bar'
m30000| Thu Jun 14 01:43:58 [conn3] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1e591abdbaaebc7698
m30000| Thu Jun 14 01:43:58 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:132 w:852653 reslen:86 0ms
m30000| Thu Jun 14 01:43:58 [conn3] insert foo.bar keyUpdates:0 locks(micros) W:132 w:852705 0ms
m30000| Thu Jun 14 01:43:58 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:58 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:58 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:132 w:852705 reslen:67 0ms
m30000| Thu Jun 14 01:43:58 [conn4] runQuery called admin.$cmd { splitVector: "foo.bar", keyPattern: { d: 1.0 }, min: { d: MinKey }, max: { d: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:58 [conn4] run command admin.$cmd { splitVector: "foo.bar", keyPattern: { d: 1.0 }, min: { d: MinKey }, max: { d: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:58 [conn4] command admin.$cmd command: { splitVector: "foo.bar", keyPattern: { d: 1.0 }, min: { d: MinKey }, max: { d: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 locks(micros) W:58 r:934 w:1874 reslen:53 0ms
m30000| Thu Jun 14 01:43:58 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:267 reslen:163 0ms
m30000| Thu Jun 14 01:43:58 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:267 reslen:86 0ms
m30000| Thu Jun 14 01:43:58 [conn5] update foo.bar query: { d: "d" } update: { $set: { x: "x" } } nscanned:1 nmoved:1 nupdated:1 keyUpdates:0 locks(micros) r:40 w:505 0ms
m30000| Thu Jun 14 01:43:58 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:58 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:58 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:505 reslen:85 0ms
m30000| Thu Jun 14 01:43:58 [conn9] runQuery called admin.$cmd { splitVector: "foo.bar", keyPattern: { d: 1.0 }, min: { d: MinKey }, max: { d: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30999| Thu Jun 14 01:43:58 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 4 current: 5 version: 1|0||4fd97a1e591abdbaaebc7698 manager: 0x8df16e0
m30999| Thu Jun 14 01:43:58 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:58 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:43:58 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { d: MinKey } max: { d: MaxKey } dataWritten: 174594 splitThreshold: 921
m30999| Thu Jun 14 01:43:58 [conn] chunk not full enough to trigger auto-split no split entry
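By this point the shell has dropped foo.bar, re-sharded it on { d: 1 } under the new epoch 4fd97a1e591abdbaaebc7698, inserted one document, updated it through a second mongos, and the inserting mongos has checked whether the single chunk needs an autosplit. A rough pymongo equivalent of that shell-side sequence, with the mongos port assumed from the log (30999 issued the shardCollection) and everything else a sketch:

    # Sketch of the shell-side steps behind the log above: re-shard foo.bar on { d: 1 },
    # insert a document, and apply the { $set: { x: "x" } } update seen on the shard.
    from pymongo import MongoClient

    mongos = MongoClient("localhost", 30999)

    mongos["foo"]["bar"].drop()
    mongos.admin.command("shardCollection", "foo.bar", key={"d": 1})

    mongos["foo"]["bar"].insert_one({"d": "d"})
    mongos["foo"]["bar"].update_one({"d": "d"}, {"$set": {"x": "x"}})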
m30997| Thu Jun 14 01:43:58 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 5 version: 1|0||4fd97a1d591abdbaaebc7696 manager: 0xa2b5f20
m30997| Thu Jun 14 01:43:58 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } 0xa2b3050
m30001| Thu Jun 14 01:43:58 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:58 [conn3] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:58 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:86 reslen:67 0ms
m30001| Thu Jun 14 01:43:58 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:58 [conn5] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:58 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:4 reslen:67 0ms
m30000| Thu Jun 14 01:43:58 [conn9] run command admin.$cmd { splitVector: "foo.bar", keyPattern: { d: 1.0 }, min: { d: MinKey }, max: { d: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30000| Thu Jun 14 01:43:58 [conn9] command admin.$cmd command: { splitVector: "foo.bar", keyPattern: { d: 1.0 }, min: { d: MinKey }, max: { d: MaxKey }, maxChunkSizeBytes: 1024, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 locks(micros) r:70 reslen:53 0ms
m30000| Thu Jun 14 01:43:58 [conn5] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:58 [conn5] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:58 [conn5] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) r:40 w:505 reslen:85 0ms
m30000| Thu Jun 14 01:43:58 [conn5] runQuery called foo.bar {}
m30000| Thu Jun 14 01:43:58 [conn5] query foo.bar ntoreturn:1 keyUpdates:0 locks(micros) r:74 w:505 nreturned:1 reslen:60 0ms
m30000| Thu Jun 14 01:43:58 [conn7] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn7] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn7] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:58 [conn7] trying to set shard version of 1|0||4fd97a1d591abdbaaebc7696 for 'foo.bar'
m30000| Thu Jun 14 01:43:58 [conn7] verifying cached version 1|0||4fd97a1e591abdbaaebc7698 and new version 1|0||4fd97a1d591abdbaaebc7696 for 'foo.bar'
m30000| Thu Jun 14 01:43:58 [conn7] warning: detected incompatible version epoch in new version 1|0||4fd97a1d591abdbaaebc7696, old version was 1|0||4fd97a1e591abdbaaebc7698
m30000| Thu Jun 14 01:43:58 [conn7] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1e591abdbaaebc7698
m30000| Thu Jun 14 01:43:58 [conn7] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:15 r:35 w:87 reslen:274 0ms
m30998| Thu Jun 14 01:43:58 [conn] loading chunk manager for collection foo.bar using old chunk manager w/ version 1|0||4fd97a1d591abdbaaebc7696 and 1 chunks
m30998| Thu Jun 14 01:43:58 [conn] warning: got invalid chunk version 1|0||4fd97a1e591abdbaaebc7698 in document { _id: "foo.bar-d_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), ns: "foo.bar", min: { d: MinKey }, max: { d: MaxKey }, shard: "shard0000" } when trying to load differing chunks at version 1|0||4fd97a1d591abdbaaebc7696
m30998| Thu Jun 14 01:43:58 [conn] warning: major change in chunk information found when reloading foo.bar, previous version was 1|0||4fd97a1d591abdbaaebc7696
m30998| Thu Jun 14 01:43:58 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 7 version: 0|0||000000000000000000000000 based on: 1|0||4fd97a1d591abdbaaebc7696
m30998| Thu Jun 14 01:43:58 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:43:58 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1e591abdbaaebc7698
m30998| Thu Jun 14 01:43:58 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 8 version: 1|0||4fd97a1e591abdbaaebc7698 based on: (empty)
m30998| Thu Jun 14 01:43:58 [conn] found 0 dropped collections and 1 sharded collections for database foo
m30998| Thu Jun 14 01:43:58 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 6 current: 8 version: 1|0||4fd97a1e591abdbaaebc7698 manager: 0xb2d03150
m30998| Thu Jun 14 01:43:58 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30998| Thu Jun 14 01:43:58 [conn] setShardVersion failed!
m30998| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30998| Thu Jun 14 01:43:58 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 6 current: 8 version: 1|0||4fd97a1e591abdbaaebc7698 manager: 0xb2d03150
m30998| Thu Jun 14 01:43:58 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30998| Thu Jun 14 01:43:58 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), ok: 1.0 }
m30998| Thu Jun 14 01:43:58 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { d: MinKey } max: { d: MaxKey } dataWritten: 9314144 splitThreshold: 921
m30998| Thu Jun 14 01:43:58 [conn] chunk not full enough to trigger auto-split no split entry
m30998| Thu Jun 14 01:43:58 [conn] [pcursor] creating pcursor over QSpec { ns: "foo.bar", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30998| Thu Jun 14 01:43:58 [conn] [pcursor] initializing over 1 shards required by [foo.bar @ 1|0||4fd97a1e591abdbaaebc7698]
m30998| Thu Jun 14 01:43:58 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30998| Thu Jun 14 01:43:58 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1e591abdbaaebc7698", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:43:58 [conn] [pcursor] finishing over 1 shards
m30998| Thu Jun 14 01:43:58 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1e591abdbaaebc7698", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:43:58 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "foo.bar @ 1|0||4fd97a1e591abdbaaebc7698", cursor: { _id: ObjectId('4fd97a1e06b725597f257fdd'), d: "d", x: "x" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30997| Thu Jun 14 01:43:58 [conn] setShardVersion failed!
m30997| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), ns: "foo.bar", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1d591abdbaaebc7696'), globalVersion: Timestamp 1000|0, globalVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), errmsg: "client version differs from config's for collection 'foo.bar'", ok: 0.0 }
m30997| Thu Jun 14 01:43:58 [conn] loading chunk manager for collection foo.bar using old chunk manager w/ version 1|0||4fd97a1d591abdbaaebc7696 and 1 chunks
m30997| Thu Jun 14 01:43:58 [conn] warning: got invalid chunk version 1|0||4fd97a1e591abdbaaebc7698 in document { _id: "foo.bar-d_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), ns: "foo.bar", min: { d: MinKey }, max: { d: MaxKey }, shard: "shard0000" } when trying to load differing chunks at version 1|0||4fd97a1d591abdbaaebc7696
m30997| Thu Jun 14 01:43:58 [conn] warning: major change in chunk information found when reloading foo.bar, previous version was 1|0||4fd97a1d591abdbaaebc7696
m30997| Thu Jun 14 01:43:58 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 6 version: 0|0||000000000000000000000000 based on: 1|0||4fd97a1d591abdbaaebc7696
m30997| Thu Jun 14 01:43:58 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30997| Thu Jun 14 01:43:58 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1e591abdbaaebc7698
m30997| Thu Jun 14 01:43:58 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 7 version: 1|0||4fd97a1e591abdbaaebc7698 based on: (empty)
m30997| Thu Jun 14 01:43:58 [conn] found 0 dropped collections and 1 sharded collections for database foo
m30997| Thu Jun 14 01:43:58 [conn] update will be retried b/c sharding config info is stale, retries: 0 ns: foo.bar data: { c: "c" }
m30997| Thu Jun 14 01:43:59 [conn] User Assertion: 8013:For non-multi updates, must have _id or full shard key ({ d: 1.0 }) in query
m30997| Thu Jun 14 01:43:59 [conn] AssertionException while processing op type : 2001 to : foo.bar :: caused by :: 8013 For non-multi updates, must have _id or full shard key ({ d: 1.0 }) in query
m30000| Thu Jun 14 01:43:59 [conn7] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn7] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn7] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn7] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:35 w:87 reslen:163 0ms
m30000| Thu Jun 14 01:43:59 [conn7] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn7] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn7] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn7] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:35 w:87 reslen:86 0ms
m30000| Thu Jun 14 01:43:59 [conn7] runQuery called foo.bar {}
m30000| Thu Jun 14 01:43:59 [conn7] query foo.bar ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:78 w:87 nreturned:1 reslen:60 0ms
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] creating pcursor over QSpec { ns: "foo.bar", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] initializing over 1 shards required by [foo.bar @ 1|0||4fd97a1e591abdbaaebc7698]
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30997| Thu Jun 14 01:43:59 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 7 version: 1|0||4fd97a1e591abdbaaebc7698 manager: 0xa2b6318
m30997| Thu Jun 14 01:43:59 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), shard: "shard0000", shardHost: "localhost:30000" } 0xa2b3050
m30997| Thu Jun 14 01:43:59 [conn] setShardVersion failed!
m30997| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30997| Thu Jun 14 01:43:59 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 2 current: 7 version: 1|0||4fd97a1e591abdbaaebc7698 manager: 0xa2b6318
m30997| Thu Jun 14 01:43:59 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), serverID: ObjectId('4fd97a1b3fa2ba75ec315063'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xa2b3050
m30997| Thu Jun 14 01:43:59 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1c591abdbaaebc7692'), ok: 1.0 }
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] needed to set remote version on connection to value compatible with [foo.bar @ 1|0||4fd97a1e591abdbaaebc7698]
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1e591abdbaaebc7698", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] finishing over 1 shards
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 1|0||4fd97a1e591abdbaaebc7698", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30997| Thu Jun 14 01:43:59 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "foo.bar @ 1|0||4fd97a1e591abdbaaebc7698", cursor: { _id: ObjectId('4fd97a1e06b725597f257fdd'), d: "d", x: "x" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Thu Jun 14 01:43:59 [conn] DROP: foo.bar
m30999| Thu Jun 14 01:43:59 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:59-6", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652639579), what: "dropCollection.start", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:59 [conn] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:43:59 [conn] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383:conn:424238335",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652635:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:43:59 2012" },
m30999| "why" : "drop",
m30999| "ts" : { "$oid" : "4fd97a1f591abdbaaebc7699" } }
m30999| { "_id" : "foo.bar",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a1e591abdbaaebc7697" } }
m30999| Thu Jun 14 01:43:59 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' acquired, ts : 4fd97a1f591abdbaaebc7699
m30999| Thu Jun 14 01:43:59 [conn] ChunkManager::drop : foo.bar
m30999| Thu Jun 14 01:43:59 [conn] ChunkManager::drop : foo.bar all locked
m30999| Thu Jun 14 01:43:59 [conn] ChunkManager::drop : foo.bar removed shard data
m30999| Thu Jun 14 01:43:59 [conn] ChunkManager::drop : foo.bar removed chunk data
m30999| Thu Jun 14 01:43:59 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8df13c0
m30999| Thu Jun 14 01:43:59 [conn] ChunkManager::drop : foo.bar DONE
m30999| Thu Jun 14 01:43:59 [conn] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:59-7", server: "domU-12-31-39-01-70-B4", clientAddr: "N/A", time: new Date(1339652639581), what: "dropCollection", ns: "foo.bar", details: {} }
m30999| Thu Jun 14 01:43:59 [conn] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30999:1339652635:1804289383' unlocked.
m30999| Thu Jun 14 01:43:59 [conn] sharded index write for foo.system.indexes
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:59 [conn4] run command foo.$cmd { drop: "bar" }
m30000| Thu Jun 14 01:43:59 [conn4] CMD: drop foo.bar
m30000| Thu Jun 14 01:43:59 [conn4] dropCollection: foo.bar
m30000| Thu Jun 14 01:43:59 [conn4] dropIndexes done
m30000| Thu Jun 14 01:43:59 [conn4] command foo.$cmd command: { drop: "bar" } ntoreturn:1 keyUpdates:0 locks(micros) W:58 r:934 w:2358 reslen:114 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn4] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn4] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn4] entering shard mode for connection
m30000| Thu Jun 14 01:43:59 [conn4] wiping data for: foo.bar
m30000| Thu Jun 14 01:43:59 [conn4] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:934 w:2358 reslen:135 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:59 [conn4] run command admin.$cmd { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:59 [conn4] command: { unsetSharding: 1 }
m30000| Thu Jun 14 01:43:59 [conn4] command admin.$cmd command: { unsetSharding: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:934 w:2358 reslen:37 0ms
{ "ok" : 0, "errmsg" : "it is already the primary" }
m30000| Thu Jun 14 01:43:59 [conn3] create collection foo.bar {}
m30000| Thu Jun 14 01:43:59 [conn3] allocExtent foo.bar size 8192 1
m30000| Thu Jun 14 01:43:59 [conn3] adding _id index for collection foo.bar
m30000| Thu Jun 14 01:43:59 [conn3] build index foo.bar { _id: 1 }
m30000| mem info: before index start vsize: 159 resident: 33 mapped: 32
m30000| Thu Jun 14 01:43:59 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652639.8/
m30000| mem info: before final sort vsize: 159 resident: 33 mapped: 32
m30000| mem info: after final sort vsize: 159 resident: 33 mapped: 32
m30000| Thu Jun 14 01:43:59 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:59 [conn3] allocExtent foo.bar.$_id_ size 36864 1
m30000| Thu Jun 14 01:43:59 [conn3] New namespace: foo.bar.$_id_
m30000| Thu Jun 14 01:43:59 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:59 [conn3] New namespace: foo.bar
m30000| Thu Jun 14 01:43:59 [conn3] info: creating collection foo.bar on add index
m30000| Thu Jun 14 01:43:59 [conn3] build index foo.bar { e: 1.0 }
m30000| mem info: before index start vsize: 159 resident: 33 mapped: 32
m30000| Thu Jun 14 01:43:59 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652639.9/
m30000| mem info: before final sort vsize: 159 resident: 33 mapped: 32
m30000| mem info: after final sort vsize: 159 resident: 33 mapped: 32
m30000| Thu Jun 14 01:43:59 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:43:59 [conn3] allocExtent foo.bar.$e_1 size 36864 1
m30000| Thu Jun 14 01:43:59 [conn3] New namespace: foo.bar.$e_1
m30000| Thu Jun 14 01:43:59 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:43:59 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:43:59 [conn3] insert foo.system.indexes keyUpdates:0 locks(micros) W:132 w:854138 1ms
m30000| Thu Jun 14 01:43:59 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:59 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:43:59 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:132 w:854138 reslen:67 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called foo.system.indexes { ns: "foo.bar" }
m30000| Thu Jun 14 01:43:59 [conn4] query foo.system.indexes query: { ns: "foo.bar" } ntoreturn:0 keyUpdates:0 locks(micros) W:76 r:995 w:2358 nreturned:2 reslen:145 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called foo.system.namespaces { name: "foo.bar" }
m30000| Thu Jun 14 01:43:59 [conn4] query foo.system.namespaces query: { name: "foo.bar" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:1044 w:2358 nreturned:1 reslen:43 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { e: 1.0 } }
m30000| Thu Jun 14 01:43:59 [conn4] run command admin.$cmd { checkShardingIndex: "foo.bar", keyPattern: { e: 1.0 } }
m30000| Thu Jun 14 01:43:59 [conn4] command admin.$cmd command: { checkShardingIndex: "foo.bar", keyPattern: { e: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:1066 w:2358 reslen:37 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:59 [conn4] run command foo.$cmd { count: "bar", query: {} }
m30000| Thu Jun 14 01:43:59 [conn4] command foo.$cmd command: { count: "bar", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:1084 w:2358 reslen:48 0ms
m30000| Thu Jun 14 01:43:59 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) W:76 r:1084 w:2375 0ms
m30999| Thu Jun 14 01:43:59 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30999| Thu Jun 14 01:43:59 [conn] found 1 dropped collections and 0 sharded collections for database foo
m30999| Thu Jun 14 01:43:59 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Thu Jun 14 01:43:59 [conn] found 0 dropped collections and 0 sharded collections for database admin
m30999| Thu Jun 14 01:43:59 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { e: 1.0 } }
m30999| Thu Jun 14 01:43:59 [conn] enable sharding on: foo.bar with shard key: { e: 1.0 }
m30999| Thu Jun 14 01:43:59 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd97a1f591abdbaaebc769a
m30999| Thu Jun 14 01:43:59 [conn] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1f591abdbaaebc769a
m30999| Thu Jun 14 01:43:59 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 6 version: 1|0||4fd97a1f591abdbaaebc769a based on: (empty)
m30999| Thu Jun 14 01:43:59 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 5 current: 6 version: 1|0||4fd97a1f591abdbaaebc769a manager: 0x8df16e0
m30999| Thu Jun 14 01:43:59 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:59 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30000| Thu Jun 14 01:43:59 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:139 w:854138 reslen:171 0ms
m30000| Thu Jun 14 01:43:59 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30000| Thu Jun 14 01:43:59 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:43:59 [conn3] trying to set shard version of 1|0||4fd97a1f591abdbaaebc769a for 'foo.bar'
m30000| Thu Jun 14 01:43:59 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:43:59 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a1f591abdbaaebc769a for 'foo.bar'
m30000| Thu Jun 14 01:43:59 [conn3] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1f591abdbaaebc769a
m30000| Thu Jun 14 01:43:59 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:146 w:854138 reslen:86 0ms
m30999| Thu Jun 14 01:43:59 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 5 current: 6 version: 1|0||4fd97a1f591abdbaaebc769a manager: 0x8df16e0
m30999| Thu Jun 14 01:43:59 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ded7b8
m30999| Thu Jun 14 01:43:59 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:43:59 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:59 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:59 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:146 w:854138 reslen:67 0ms
m30001| Thu Jun 14 01:43:59 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:59 [conn3] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:59 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:86 reslen:67 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called admin.$cmd { splitChunk: "foo.bar", keyPattern: { e: 1.0 }, min: { e: MinKey }, max: { e: MaxKey }, from: "shard0000", splitKeys: [ { e: 0.0 } ], shardId: "foo.bar-e_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] run command admin.$cmd { splitChunk: "foo.bar", keyPattern: { e: 1.0 }, min: { e: MinKey }, max: { e: MaxKey }, from: "shard0000", splitKeys: [ { e: 0.0 } ], shardId: "foo.bar-e_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] command: { splitChunk: "foo.bar", keyPattern: { e: 1.0 }, min: { e: MinKey }, max: { e: MaxKey }, from: "shard0000", splitKeys: [ { e: 0.0 } ], shardId: "foo.bar-e_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] creating new connection to:localhost:29000
m30000| Thu Jun 14 01:43:59 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:43:59 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { e: MinKey } max: { e: MaxKey }
m29000| Thu Jun 14 01:43:59 [initandlisten] connection accepted from 127.0.0.1:44440 #15 (15 connections now open)
m30000| Thu Jun 14 01:43:59 [conn4] connected connection!
m30000| Thu Jun 14 01:43:59 [conn4] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { e: 1.0 }, min: { e: MinKey }, max: { e: MaxKey }, from: "shard0000", splitKeys: [ { e: 0.0 } ], shardId: "foo.bar-e_MinKey", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:43:59 [conn4] skew from remote server localhost:29000 found: 0
m30000| Thu Jun 14 01:43:59 [conn4] skew from remote server localhost:29000 found: 0
m30000| Thu Jun 14 01:43:59 [conn4] skew from remote server localhost:29000 found: 0
m30000| Thu Jun 14 01:43:59 [conn4] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
m30000| Thu Jun 14 01:43:59 [LockPinger] creating distributed lock ping thread for localhost:29000 and process domU-12-31-39-01-70-B4:30000:1339652639:521848788 (sleeping for 30000ms)
m30000| Thu Jun 14 01:43:59 [conn4] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652639:521848788:
m30000| { "state" : 1,
m30000| "who" : "domU-12-31-39-01-70-B4:30000:1339652639:521848788:conn4:1986361736",
m30000| "process" : "domU-12-31-39-01-70-B4:30000:1339652639:521848788",
m30000| "when" : { "$date" : "Thu Jun 14 01:43:59 2012" },
m30000| "why" : "split-{ e: MinKey }",
m30000| "ts" : { "$oid" : "4fd97a1f894a957359464458" } }
m30000| { "_id" : "foo.bar",
m30000| "state" : 0,
m30000| "ts" : { "$oid" : "4fd97a1f591abdbaaebc7699" } }
m30000| Thu Jun 14 01:43:59 [LockPinger] cluster localhost:29000 pinged successfully at Thu Jun 14 01:43:59 2012 by distributed lock pinger 'localhost:29000/domU-12-31-39-01-70-B4:30000:1339652639:521848788', sleeping for 30000ms
m30000| Thu Jun 14 01:43:59 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652639:521848788' acquired, ts : 4fd97a1f894a957359464458
m30000| Thu Jun 14 01:43:59 [conn4] trying to set shard version of 0|0||000000000000000000000000 for 'foo.bar'
m30000| Thu Jun 14 01:43:59 [conn4] verifying cached version 1|0||4fd97a1f591abdbaaebc769a and new version 0|0||000000000000000000000000 for 'foo.bar'
m30000| Thu Jun 14 01:43:59 [conn4] loading new chunks for collection foo.bar using old chunk manager w/ version 1|0||4fd97a1f591abdbaaebc769a and 1 chunks
m30000| Thu Jun 14 01:43:59 [conn4] loaded 1 chunks into new chunk manager for foo.bar with version 1|0||4fd97a1f591abdbaaebc769a
m30000| Thu Jun 14 01:43:59 [conn4] splitChunk accepted at version 1|0||4fd97a1f591abdbaaebc769a
m30000| Thu Jun 14 01:43:59 [conn4] before split on lastmod: 1|0||000000000000000000000000 min: { e: MinKey } max: { e: MaxKey }
m30000| Thu Jun 14 01:43:59 [conn4] splitChunk update: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "foo.bar-e_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), ns: "foo.bar", min: { e: MinKey }, max: { e: 0.0 }, shard: "shard0000" }, o2: { _id: "foo.bar-e_MinKey" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "foo.bar-e_0.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), ns: "foo.bar", min: { e: 0.0 }, max: { e: MaxKey }, shard: "shard0000" }, o2: { _id: "foo.bar-e_0.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "foo.bar" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|0 } } ] }
m30000| Thu Jun 14 01:43:59 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:59-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:60306", time: new Date(1339652639594), what: "split", ns: "foo.bar", details: { before: { min: { e: MinKey }, max: { e: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { e: MinKey }, max: { e: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a1f591abdbaaebc769a') }, right: { min: { e: 0.0 }, max: { e: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a1f591abdbaaebc769a') } } }
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called foo.bar { query: {}, $min: { e: 0.0 }, $max: { e: MaxKey } }
m30000| Thu Jun 14 01:43:59 [conn4] query foo.bar query: { query: {}, $min: { e: 0.0 }, $max: { e: MaxKey } } ntoreturn:2 keyUpdates:0 locks(micros) r:93 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called foo.bar { query: {}, $min: { e: MinKey }, $max: { e: 0.0 } }
m30000| Thu Jun 14 01:43:59 [conn4] query foo.bar query: { query: {}, $min: { e: MinKey }, $max: { e: 0.0 } } ntoreturn:2 keyUpdates:0 locks(micros) r:49 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:43:59 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652639:521848788' unlocked.
m30000| Thu Jun 14 01:43:59 [conn4] command admin.$cmd command: { splitChunk: "foo.bar", keyPattern: { e: 1.0 }, min: { e: MinKey }, max: { e: MaxKey }, from: "shard0000", splitKeys: [ { e: 0.0 } ], shardId: "foo.bar-e_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:1084 w:2375 reslen:37 5ms
m30999| Thu Jun 14 01:43:59 [conn] loading chunk manager for collection foo.bar using old chunk manager w/ version 1|0||4fd97a1f591abdbaaebc769a and 1 chunks
m30999| Thu Jun 14 01:43:59 [conn] loaded 2 chunks into new chunk manager for foo.bar with version 1|2||4fd97a1f591abdbaaebc769a
m30999| Thu Jun 14 01:43:59 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 7 version: 1|2||4fd97a1f591abdbaaebc769a based on: 1|0||4fd97a1f591abdbaaebc769a
{ "ok" : 1 }
m30000| Thu Jun 14 01:43:59 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:59 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:59 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:146 w:854138 reslen:67 0ms
m30001| Thu Jun 14 01:43:59 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:59 [conn3] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:43:59 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:86 reslen:67 0ms
m30000| Thu Jun 14 01:43:59 [conn4] runQuery called admin.$cmd { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-e_0.0", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] run command admin.$cmd { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-e_0.0", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-e_0.0", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-e_0.0", configdb: "localhost:29000" }
m30000| Thu Jun 14 01:43:59 [conn4] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:43:59 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { e: 0.0 }, to: "shard0001" }
m30999| Thu Jun 14 01:43:59 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { e: 0.0 } max: { e: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Thu Jun 14 01:43:59 [conn4] about to acquire distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652639:521848788:
m30000| { "state" : 1,
m30000| "who" : "domU-12-31-39-01-70-B4:30000:1339652639:521848788:conn4:1986361736",
m30000| "process" : "domU-12-31-39-01-70-B4:30000:1339652639:521848788",
m30000| "when" : { "$date" : "Thu Jun 14 01:43:59 2012" },
m30000| "why" : "migrate-{ e: 0.0 }",
m30000| "ts" : { "$oid" : "4fd97a1f894a957359464459" } }
m30000| { "_id" : "foo.bar",
m30000| "state" : 0,
m30000| "ts" : { "$oid" : "4fd97a1f894a957359464458" } }
m30000| Thu Jun 14 01:43:59 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652639:521848788' acquired, ts : 4fd97a1f894a957359464459
m30000| Thu Jun 14 01:43:59 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:43:59-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:60306", time: new Date(1339652639599), what: "moveChunk.start", ns: "foo.bar", details: { min: { e: 0.0 }, max: { e: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:43:59 [conn4] trying to set shard version of 1|2||4fd97a1f591abdbaaebc769a for 'foo.bar'
m30000| Thu Jun 14 01:43:59 [conn4] moveChunk request accepted at version 1|2||4fd97a1f591abdbaaebc769a
m30000| Thu Jun 14 01:43:59 [conn4] moveChunk number of documents: 0
m30000| Thu Jun 14 01:43:59 [conn4] creating new connection to:localhost:30001
m30000| Thu Jun 14 01:43:59 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:43:59 [initandlisten] connection accepted from 127.0.0.1:48895 #9 (9 connections now open)
m30000| Thu Jun 14 01:43:59 [conn4] connected connection!
m30001| Thu Jun 14 01:43:59 [conn9] runQuery called admin.$cmd { _recvChunkStart: "foo.bar", from: "localhost:30000", min: { e: 0.0 }, max: { e: MaxKey }, shardKeyPattern: { e: 1 }, configServer: "localhost:29000" }
m30001| Thu Jun 14 01:43:59 [conn9] run command admin.$cmd { _recvChunkStart: "foo.bar", from: "localhost:30000", min: { e: 0.0 }, max: { e: MaxKey }, shardKeyPattern: { e: 1 }, configServer: "localhost:29000" }
m30001| Thu Jun 14 01:43:59 [conn9] command: { _recvChunkStart: "foo.bar", from: "localhost:30000", min: { e: 0.0 }, max: { e: MaxKey }, shardKeyPattern: { e: 1 }, configServer: "localhost:29000" }
m30001| Thu Jun 14 01:43:59 [conn9] opening db: admin
m30001| Thu Jun 14 01:43:59 [conn9] command admin.$cmd command: { _recvChunkStart: "foo.bar", from: "localhost:30000", min: { e: 0.0 }, max: { e: MaxKey }, shardKeyPattern: { e: 1 }, configServer: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) w:175 reslen:47 0ms
m30001| Thu Jun 14 01:43:59 [migrateThread] creating new connection to:localhost:30000
m30001| Thu Jun 14 01:43:59 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:43:59 [initandlisten] connection accepted from 127.0.0.1:60321 #11 (11 connections now open)
m30000| Thu Jun 14 01:43:59 [conn11] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:59 [conn11] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:43:59 [conn11] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:67 0ms
m30001| Thu Jun 14 01:43:59 [migrateThread] connected connection!
m30000| Thu Jun 14 01:43:59 [conn11] runQuery called foo.system.indexes { ns: "foo.bar" }
m30000| Thu Jun 14 01:43:59 [conn11] query foo.system.indexes query: { ns: "foo.bar" } ntoreturn:0 keyUpdates:0 locks(micros) r:62 nreturned:2 reslen:145 0ms
m30001| Thu Jun 14 01:43:59 [migrateThread] opening db: foo
m30001| Thu Jun 14 01:43:59 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:43:59 [FileAllocator] creating directory /data/db/test1/_tmp
m30001| Thu Jun 14 01:43:59 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:43:59 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:43:59 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.318 secs
m30001| Thu Jun 14 01:43:59 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:43:59 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:44:00 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.354 secs
m30001| Thu Jun 14 01:44:00 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:44:00 [migrateThread] allocExtent foo.system.indexes size 3840 0
m30001| Thu Jun 14 01:44:00 [migrateThread] New namespace: foo.system.indexes
m30001| Thu Jun 14 01:44:00 [migrateThread] allocExtent foo.system.namespaces size 2048 0
m30001| Thu Jun 14 01:44:00 [migrateThread] New namespace: foo.system.namespaces
m30001| Thu Jun 14 01:44:00 [migrateThread] create collection foo.bar {}
m30001| Thu Jun 14 01:44:00 [migrateThread] allocExtent foo.bar size 8192 0
m30001| Thu Jun 14 01:44:00 [migrateThread] adding _id index for collection foo.bar
m30001| Thu Jun 14 01:44:00 [migrateThread] build index foo.bar { _id: 1 }
m30001| mem info: before index start vsize: 168 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:00 [migrateThread] external sort root: /data/db/test1/_tmp/esort.1339652640.0/
m30001| mem info: before final sort vsize: 168 resident: 32 mapped: 32
m30001| mem info: after final sort vsize: 168 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:00 [migrateThread] external sort used : 0 files in 0 secs
m30001| Thu Jun 14 01:44:00 [migrateThread] allocExtent foo.bar.$_id_ size 36864 0
m30001| Thu Jun 14 01:44:00 [migrateThread] New namespace: foo.bar.$_id_
m30001| Thu Jun 14 01:44:00 [migrateThread] done building bottom layer, going to commit
m30001| Thu Jun 14 01:44:00 [migrateThread] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:00 [migrateThread] New namespace: foo.bar
m30001| Thu Jun 14 01:44:00 [migrateThread] info: creating collection foo.bar on add index
m30001| Thu Jun 14 01:44:00 [migrateThread] build index foo.bar { e: 1.0 }
m30001| mem info: before index start vsize: 168 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:00 [migrateThread] external sort root: /data/db/test1/_tmp/esort.1339652640.1/
m30001| mem info: before final sort vsize: 168 resident: 32 mapped: 32
m30001| mem info: after final sort vsize: 168 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:00 [migrateThread] external sort used : 0 files in 0 secs
m30001| Thu Jun 14 01:44:00 [migrateThread] allocExtent foo.bar.$e_1 size 36864 0
m30001| Thu Jun 14 01:44:00 [migrateThread] New namespace: foo.bar.$e_1
m30001| Thu Jun 14 01:44:00 [migrateThread] done building bottom layer, going to commit
m30001| Thu Jun 14 01:44:00 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _migrateClone: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _migrateClone: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _migrateClone: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:87 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:94 reslen:51 0ms
m30001| Thu Jun 14 01:44:00 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { e: 0.0 } -> { e: MaxKey }
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:101 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:116 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:132 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:154 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:170 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:187 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:203 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:220 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:242 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:259 reslen:51 0ms
m30001| Thu Jun 14 01:44:00 [FileAllocator] flushing directory /data/db/test1
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:272 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:280 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:288 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:296 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:305 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:313 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:321 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:329 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:337 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:346 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:354 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:362 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:371 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:379 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:386 reslen:51 0ms
m30000| Thu Jun 14 01:44:00 [conn4] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { e: 0.0 }, max: { e: MaxKey }, shardKeyPattern: { e: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:44:00 [conn9] runQuery called admin.$cmd { _recvChunkStatus: 1 }
m30001| Thu Jun 14 01:44:00 [conn9] run command admin.$cmd { _recvChunkStatus: 1 }
m30001| Thu Jun 14 01:44:00 [conn9] command: { _recvChunkStatus: 1 }
m30001| Thu Jun 14 01:44:00 [conn9] command admin.$cmd command: { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:175 reslen:252 0ms
m30000| Thu Jun 14 01:44:00 [conn4] moveChunk setting version to: 2|0||4fd97a1f591abdbaaebc769a
m30001| Thu Jun 14 01:44:00 [conn9] runQuery called admin.$cmd { _recvChunkCommit: 1 }
m30001| Thu Jun 14 01:44:00 [conn9] run command admin.$cmd { _recvChunkCommit: 1 }
m30001| Thu Jun 14 01:44:00 [conn9] command: { _recvChunkCommit: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] runQuery called admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] run command admin.$cmd { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command: { _transferMods: 1 }
m30000| Thu Jun 14 01:44:00 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:394 reslen:51 0ms
m30001| Thu Jun 14 01:44:00 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { e: 0.0 } -> { e: MaxKey }
m30001| Thu Jun 14 01:44:00 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:00-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652640609), what: "moveChunk.to", ns: "foo.bar", details: { min: { e: 0.0 }, max: { e: MaxKey }, step1 of 5: 693, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 315 } }
m30001| Thu Jun 14 01:44:00 [migrateThread] creating new connection to:localhost:29000
m30001| Thu Jun 14 01:44:00 BackgroundJob starting: ConnectBG
m29000| Thu Jun 14 01:44:00 [initandlisten] connection accepted from 127.0.0.1:44443 #16 (16 connections now open)
m30001| Thu Jun 14 01:44:00 [migrateThread] connected connection!
m30001| Thu Jun 14 01:44:00 [conn9] command admin.$cmd command: { _recvChunkCommit: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:175 reslen:250 11ms
m30000| Thu Jun 14 01:44:00 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30000", min: { e: 0.0 }, max: { e: MaxKey }, shardKeyPattern: { e: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Thu Jun 14 01:44:00 [conn4] moveChunk updating self version to: 2|1||4fd97a1f591abdbaaebc769a through { e: MinKey } -> { e: 0.0 } for collection 'foo.bar'
m30000| Thu Jun 14 01:44:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:00-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:60306", time: new Date(1339652640614), what: "moveChunk.commit", ns: "foo.bar", details: { min: { e: 0.0 }, max: { e: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Thu Jun 14 01:44:00 [conn4] doing delete inline
m30000| Thu Jun 14 01:44:00 [conn4] moveChunk deleted: 0
m30000| Thu Jun 14 01:44:00 [conn4] moveChunk repl sync took 0 seconds
m30000| Thu Jun 14 01:44:00 [conn4] distributed lock 'foo.bar/domU-12-31-39-01-70-B4:30000:1339652639:521848788' unlocked.
m30000| Thu Jun 14 01:44:00 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:00-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:60306", time: new Date(1339652640614), what: "moveChunk.from", ns: "foo.bar", details: { min: { e: 0.0 }, max: { e: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1001, step5 of 6: 12, step6 of 6: 0 } }
m30000| Thu Jun 14 01:44:00 [conn4] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-e_0.0", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:76 r:1160 w:2428 reslen:37 1017ms
m30999| Thu Jun 14 01:44:00 [conn] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:44:00 [conn] loading chunk manager for collection foo.bar using old chunk manager w/ version 1|2||4fd97a1f591abdbaaebc769a and 2 chunks
m30999| Thu Jun 14 01:44:00 [conn] loaded 2 chunks into new chunk manager for foo.bar with version 2|1||4fd97a1f591abdbaaebc769a
m30999| Thu Jun 14 01:44:00 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 8 version: 2|1||4fd97a1f591abdbaaebc769a based on: 1|2||4fd97a1f591abdbaaebc769a
{ "millis" : 1018, "ok" : 1 }
m30001| Thu Jun 14 01:44:00 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:96 reslen:171 0ms
m30001| Thu Jun 14 01:44:00 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn3] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn3] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn3] trying to set shard version of 2|0||4fd97a1f591abdbaaebc769a for 'foo.bar'
m30001| Thu Jun 14 01:44:00 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:44:00 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 2|0||4fd97a1f591abdbaaebc769a for 'foo.bar'
m30999| Thu Jun 14 01:44:00 [conn] have to set shard version for conn: localhost:30001 ns:foo.bar my last seq: 2 current: 8 version: 2|0||4fd97a1f591abdbaaebc769a manager: 0x8df2e68
m30999| Thu Jun 14 01:44:00 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), shard: "shard0001", shardHost: "localhost:30001" } 0x8dedd50
m30999| Thu Jun 14 01:44:00 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Thu Jun 14 01:44:00 [conn] have to set shard version for conn: localhost:30001 ns:foo.bar my last seq: 2 current: 8 version: 2|0||4fd97a1f591abdbaaebc769a manager: 0x8df2e68
m30999| Thu Jun 14 01:44:00 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8dedd50
m30001| Thu Jun 14 01:44:00 [conn3] loaded 2 chunks into new chunk manager for foo.bar with version 2|1||4fd97a1f591abdbaaebc769a
m30001| Thu Jun 14 01:44:00 [conn3] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b591abdbaaebc7690'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:102 reslen:86 2ms
m30001| Thu Jun 14 01:44:00 [conn3] insert foo.bar keyUpdates:0 locks(micros) W:102 w:226 0ms
m30000| Thu Jun 14 01:44:00 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:00 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:00 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:146 w:854138 reslen:67 0ms
m30001| Thu Jun 14 01:44:00 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:44:00 [conn3] run command admin.$cmd { getlasterror: 1 }
m30001| Thu Jun 14 01:44:00 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:102 w:226 reslen:67 0ms
m30001| Thu Jun 14 01:44:00 [conn4] runQuery called admin.$cmd { splitVector: "foo.bar", keyPattern: { e: 1.0 }, min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 524288, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30001| Thu Jun 14 01:44:00 [conn4] run command admin.$cmd { splitVector: "foo.bar", keyPattern: { e: 1.0 }, min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 524288, maxSplitPoints: 2, maxChunkObjects: 250000 }
m30001| Thu Jun 14 01:44:00 [conn4] command admin.$cmd command: { splitVector: "foo.bar", keyPattern: { e: 1.0 }, min: { e: 0.0 }, max: { e: MaxKey }, maxChunkSizeBytes: 524288, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 locks(micros) r:34 reslen:53 0ms
m30001| Thu Jun 14 01:44:00 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:00 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:00 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:102 w:226 reslen:67 0ms
m30000| Thu Jun 14 01:44:00 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:00 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:00 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:146 w:854138 reslen:67 0ms
m30999| Thu Jun 14 01:44:00 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:44:00 [conn] about to initiate autosplit: ns:foo.bar at: shard0001:localhost:30001 lastmod: 2|0||000000000000000000000000 min: { e: 0.0 } max: { e: MaxKey } dataWritten: 7291820 splitThreshold: 471859
m30999| Thu Jun 14 01:44:00 [conn] chunk not full enough to trigger auto-split no split entry
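The block above shows the first mongos routing an insert to shard0001 (which now owns { e: 0 } -> { e: MaxKey }), confirming it with getlasterror on both shards, and then deciding against an auto-split. The inserted document itself is not printed in the log; the sketch below assumes { e: "e" }, which would be consistent with the remove of { e: "e" } that follows, so treat the document value as an assumption:

    // Hypothetical reconstruction of the write behind the routing above.
    // The document value is an assumption; only the namespace and routing are from the log.
    var mongos = new Mongo("localhost:30999");                 // first mongos in this run
    var coll = mongos.getDB("foo").getCollection("bar");
    coll.insert({ e: "e" });                                   // lands on shard0001 after the migration
    printjson(mongos.getDB("foo").runCommand({ getlasterror: 1 }));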
m30000| Thu Jun 14 01:44:00 [conn5] connection sharding metadata does not match for collection foo.bar, will retry (wanted : 2|0||4fd97a1f591abdbaaebc769a, received : 1|0||4fd97a1e591abdbaaebc7698) (queuing writeback)
m30000| Thu Jun 14 01:44:00 [conn5] writeback queued for op: remove len: 46 ns: foo.bar flags: 1 query: { e: "e" }
m30000| Thu Jun 14 01:44:00 [conn5] writing back msg with len: 46 op: 2006
m30000| Thu Jun 14 01:44:00 [conn5] remove foo.bar query: { e: "e" } keyUpdates:0 locks(micros) r:74 w:601 0ms
m30000| Thu Jun 14 01:44:00 [conn5] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:00 [conn5] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:00 [conn5] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) r:74 w:601 reslen:138 0ms
m30998| Thu Jun 14 01:44:00 [conn] delete : { e: "e" } 1 justOne: 1
m30000| Thu Jun 14 01:44:00 [conn6] WriteBackCommand got : { writeBack: true, ns: "foo.bar", id: ObjectId('4fd97a200000000000000001'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), msg: BinData }
m30000| Thu Jun 14 01:44:00 [conn6] command admin.$cmd command: { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') } ntoreturn:1 keyUpdates:0 reslen:312 4126ms
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd97a200000000000000001'), connectionId: 5, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), msg: BinData }, ok: 1.0 }
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:5 writebackId: 4fd97a200000000000000001 needVersion : 2|0||4fd97a1f591abdbaaebc769a mine : 1|0||4fd97a1e591abdbaaebc7698
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] op: remove len: 46 ns: foo.bar flags: 1 query: { e: "e" }
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] warning: reloading config data for foo, wanted version 2|0||4fd97a1f591abdbaaebc769a but currently have version 1|0||4fd97a1e591abdbaaebc7698
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] loaded 2 chunks into new chunk manager for foo.bar with version 2|1||4fd97a1f591abdbaaebc769a
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 9 version: 2|1||4fd97a1f591abdbaaebc769a based on: (empty)
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] found 0 dropped collections and 1 sharded collections for database foo
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] delete : { e: "e" } 1 justOne: 1
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:44:00 [initandlisten] connection accepted from 127.0.0.1:60323 #12 (12 connections now open)
m30998| Thu Jun 14 01:44:00 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] connected connection!
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30000
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30000| Thu Jun 14 01:44:00 [conn12] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30000| Thu Jun 14 01:44:00 [conn12] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30000| Thu Jun 14 01:44:00 [conn12] command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30000| Thu Jun 14 01:44:00 [conn12] entering shard mode for connection
m30000| Thu Jun 14 01:44:00 [conn12] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 0 current: 9 version: 2|1||4fd97a1f591abdbaaebc769a manager: 0x9b19c00
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } 0x9b18d50
m30000| Thu Jun 14 01:44:00 [conn12] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn12] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn12] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn12] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 reslen:86 0ms
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:44:00 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] connected connection!
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] initializing shard connection to localhost:30001
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] have to set shard version for conn: localhost:30001 ns:foo.bar my last seq: 0 current: 9 version: 2|0||4fd97a1f591abdbaaebc769a manager: 0x9b19c00
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" } 0x9b1ccf0
m30001| Thu Jun 14 01:44:00 [initandlisten] connection accepted from 127.0.0.1:48899 #10 (10 connections now open)
m30001| Thu Jun 14 01:44:00 [conn10] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30001| Thu Jun 14 01:44:00 [conn10] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30001| Thu Jun 14 01:44:00 [conn10] command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true }
m30001| Thu Jun 14 01:44:00 [conn10] entering shard mode for connection
m30001| Thu Jun 14 01:44:00 [conn10] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
m30001| Thu Jun 14 01:44:00 [conn10] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn10] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn10] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn10] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 reslen:86 0ms
m30998| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:44:00 [conn10] remove foo.bar query: { e: "e" } keyUpdates:0 locks(micros) w:251 0ms
m30000| Thu Jun 14 01:44:00 [conn12] runQuery called admin.$cmd { getLastError: 1 }
m30000| Thu Jun 14 01:44:00 [conn12] run command admin.$cmd { getLastError: 1 }
m30000| Thu Jun 14 01:44:00 [conn12] command admin.$cmd command: { getLastError: 1 } ntoreturn:1 keyUpdates:0 reslen:67 0ms
m30001| Thu Jun 14 01:44:00 [conn10] runQuery called admin.$cmd { getLastError: 1 }
m30001| Thu Jun 14 01:44:00 [conn10] run command admin.$cmd { getLastError: 1 }
m30001| Thu Jun 14 01:44:00 [conn10] command admin.$cmd command: { getLastError: 1 } ntoreturn:1 keyUpdates:0 locks(micros) w:251 reslen:67 0ms
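The sequence above is a writeback round trip: a justOne remove keyed on the shard key arrives at shard0000 through the second mongos (port 30998) with a stale version (1|0), the shard queues it, and the mongos writeback listener reloads the chunk manager to 2|1 and replays the delete, which is then routed to shard0001. A sketch of the originating statement (only the shell form is assumed; the query, mongos, and routing are from the log):

    // Hypothetical reconstruction of the remove that gets queued and written back.
    // Because { e: "e" } is on the shard key, the replayed delete targets a single chunk.
    var mongos2 = new Mongo("localhost:30998");                // second mongos in this run
    mongos2.getDB("foo").getCollection("bar").remove({ e: "e" }, true /* justOne */);
    printjson(mongos2.getDB("foo").runCommand({ getlasterror: 1 }));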
m30000| Thu Jun 14 01:44:00 [conn6] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30000| Thu Jun 14 01:44:00 [conn6] run command admin.$cmd { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30000| Thu Jun 14 01:44:00 [conn6] command: { writebacklisten: ObjectId('4fd97a1b632292824afda1ed') }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] creating pcursor over QSpec { ns: "foo.bar", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] initializing over 2 shards required by [foo.bar @ 2|1||4fd97a1f591abdbaaebc769a]
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30998| Thu Jun 14 01:44:00 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 8 current: 9 version: 2|1||4fd97a1f591abdbaaebc769a manager: 0x9b19c00
m30998| Thu Jun 14 01:44:00 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30000| Thu Jun 14 01:44:00 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:74 w:601 reslen:163 0ms
m30998| Thu Jun 14 01:44:00 [conn] setShardVersion failed!
m30998| { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), ns: "foo.bar", need_authoritative: true, errmsg: "verifying drop on 'foo.bar'", ok: 0.0 }
m30998| Thu Jun 14 01:44:00 [conn] have to set shard version for conn: localhost:30000 ns:foo.bar my last seq: 8 current: 9 version: 2|1||4fd97a1f591abdbaaebc769a manager: 0x9b19c00
m30998| Thu Jun 14 01:44:00 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x9b1a958
m30000| Thu Jun 14 01:44:00 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:00 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:74 w:601 reslen:86 0ms
m30998| Thu Jun 14 01:44:00 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), ok: 1.0 }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] needed to set remote version on connection to value compatible with [foo.bar @ 2|1||4fd97a1f591abdbaaebc769a]
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 2|1||4fd97a1f591abdbaaebc769a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30000| Thu Jun 14 01:44:00 [conn5] runQuery called foo.bar {}
m30000| Thu Jun 14 01:44:00 [conn5] query foo.bar ntoreturn:1 keyUpdates:0 locks(micros) r:99 w:601 nreturned:0 reslen:20 0ms
m30998| Thu Jun 14 01:44:00 [conn] have to set shard version for conn: localhost:30001 ns:foo.bar my last seq: 2 current: 9 version: 2|0||4fd97a1f591abdbaaebc769a manager: 0x9b19c00
m30998| Thu Jun 14 01:44:00 [conn] setShardVersion shard0001 localhost:30001 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" } 0xb2d00570
m30001| Thu Jun 14 01:44:00 [conn5] runQuery called admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn5] run command admin.$cmd { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn5] command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:00 [conn5] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), serverID: ObjectId('4fd97a1b632292824afda1ed'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:4 reslen:86 0ms
m30998| Thu Jun 14 01:44:00 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] needed to set remote version on connection to value compatible with [foo.bar @ 2|1||4fd97a1f591abdbaaebc769a]
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "foo.bar @ 2|1||4fd97a1f591abdbaaebc769a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] finishing over 2 shards
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "foo.bar @ 2|1||4fd97a1f591abdbaaebc769a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "foo.bar @ 2|1||4fd97a1f591abdbaaebc769a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "foo.bar @ 2|1||4fd97a1f591abdbaaebc769a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30001| Thu Jun 14 01:44:00 [conn5] runQuery called foo.bar {}
m30001| Thu Jun 14 01:44:00 [conn5] query foo.bar ntoreturn:1 keyUpdates:0 locks(micros) W:4 r:46 nreturned:0 reslen:20 0ms
m30998| Thu Jun 14 01:44:00 [conn] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "foo.bar @ 2|1||4fd97a1f591abdbaaebc769a", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
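The [pcursor] lines above show the second mongos fanning a single-document read out over both shards after refreshing to version 2|1; the n2return: -1 in the QSpec is the findOne-style request shape. A sketch of that read (the shell form is assumed; the namespace and mongos are from the log):

    // Hypothetical reconstruction of the read behind the pcursor trace above.
    // Each shard reports nreturned:0 here, so the call returns null.
    var mongos2 = new Mongo("localhost:30998");                // second mongos in this run
    printjson(mongos2.getDB("foo").getCollection("bar").findOne({}));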
m30000| Thu Jun 14 01:44:00 [conn7] connection sharding metadata does not match for collection foo.bar, will retry (wanted : 2|0||4fd97a1f591abdbaaebc769a, received : 1|0||4fd97a1e591abdbaaebc7698) (queuing writeback)
m30000| Thu Jun 14 01:44:00 [conn7] writeback queued for op: remove len: 46 ns: foo.bar flags: 1 query: { d: "d" }
m30000| Thu Jun 14 01:44:00 [conn7] writing back msg with len: 46 op: 2006
m30000| Thu Jun 14 01:44:00 [conn7] remove foo.bar query: { d: "d" } keyUpdates:0 locks(micros) W:15 r:78 w:180 0ms
m30000| Thu Jun 14 01:44:00 [conn7] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:00 [conn7] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:00 [conn7] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:78 w:180 reslen:138 0ms
m30000| Thu Jun 14 01:44:00 [conn10] WriteBackCommand got : { writeBack: true, ns: "foo.bar", id: ObjectId('4fd97a200000000000000002'), connectionId: 7, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), msg: BinData }
m30000| Thu Jun 14 01:44:00 [conn10] command admin.$cmd command: { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') } ntoreturn:1 keyUpdates:0 locks(micros) r:10 reslen:312 3098ms
m30997| Thu Jun 14 01:44:00 [conn] delete : { d: "d" } 1 justOne: 1
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] writebacklisten result: { data: { writeBack: true, ns: "foo.bar", id: ObjectId('4fd97a200000000000000002'), connectionId: 7, instanceIdent: "domU-12-31-39-01-70-B4:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a1f591abdbaaebc769a'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a1e591abdbaaebc7698'), msg: BinData }, ok: 1.0 }
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] connectionId: domU-12-31-39-01-70-B4:30000:7 writebackId: 4fd97a200000000000000002 needVersion : 2|0||4fd97a1f591abdbaaebc769a mine : 1|0||4fd97a1e591abdbaaebc7698
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] op: remove len: 46 ns: foo.bar flags: 1 query: { d: "d" }
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] new version change detected to 2|0||4fd97a1f591abdbaaebc769a, 1 writebacks processed at 1|0||4fd97a1c591abdbaaebc7694
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] warning: reloading config data for foo, wanted version 2|0||4fd97a1f591abdbaaebc769a but currently have version 1|0||4fd97a1e591abdbaaebc7698
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] loaded 2 chunks into new chunk manager for foo.bar with version 2|1||4fd97a1f591abdbaaebc769a
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 8 version: 2|1||4fd97a1f591abdbaaebc769a based on: (empty)
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] found 0 dropped collections and 1 sharded collections for database foo
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] delete : { d: "d" } 2 justOne: 1
m30997| Thu Jun 14 01:44:00 [WriteBackListener-localhost:30000] delete : { d: "d" } 2 justOne: 1
m30001| Thu Jun 14 01:44:00 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.646 secs
m30000| Thu Jun 14 01:44:01 [conn10] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30000| Thu Jun 14 01:44:01 [conn10] run command admin.$cmd { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30000| Thu Jun 14 01:44:01 [conn10] command: { writebacklisten: ObjectId('4fd97a1b3fa2ba75ec315063') }
m30997| Thu Jun 14 01:44:01 [WriteBackListener-localhost:30000] ERROR: error processing writeback: 8015 can only delete with a non-shard key pattern if can delete as many as we find : { d: "d" }
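The ERROR above is the writeback listener of the third mongos (port 30997) giving up on a queued justOne remove: its query { d: "d" } is not on the shard key { e: 1 }, so once the reloaded chunk manager shows foo.bar spanning two shards the delete can no longer be limited to a single document safely, and it is rejected with code 8015. A sketch of the kind of statement that produces this (the shell form is assumed; the query, mongos, and error are from the log):

    // Hypothetical reconstruction of the remove that ends in error 8015.
    // A justOne remove on a non-shard-key field cannot be targeted once the
    // collection spans more than one shard, so the replayed writeback fails.
    var mongos3 = new Mongo("localhost:30997");                // third mongos in this run
    mongos3.getDB("foo").getCollection("bar").remove({ d: "d" }, true /* justOne */);
    printjson(mongos3.getDB("foo").runCommand({ getlasterror: 1 }));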
m30999| Thu Jun 14 01:44:01 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Thu Jun 14 01:44:01 [conn3] end connection 127.0.0.1:44405 (15 connections now open)
m30000| Thu Jun 14 01:44:01 [conn3] SocketException: remote: 127.0.0.1:60303 error: 9001 socket exception [0] server [127.0.0.1:60303]
m30000| Thu Jun 14 01:44:01 [conn3] end connection 127.0.0.1:60303 (11 connections now open)
m30001| Thu Jun 14 01:44:01 [conn3] SocketException: remote: 127.0.0.1:48879 error: 9001 socket exception [0] server [127.0.0.1:48879]
m30001| Thu Jun 14 01:44:01 [conn3] end connection 127.0.0.1:48879 (9 connections now open)
m30000| Thu Jun 14 01:44:01 [conn4] SocketException: remote: 127.0.0.1:60306 error: 9001 socket exception [0] server [127.0.0.1:60306]
m30000| Thu Jun 14 01:44:01 [conn4] end connection 127.0.0.1:60306 (10 connections now open)
m29000| Thu Jun 14 01:44:01 [conn4] end connection 127.0.0.1:44408 (14 connections now open)
m29000| Thu Jun 14 01:44:01 [conn13] end connection 127.0.0.1:44426 (14 connections now open)
m29000| Thu Jun 14 01:44:01 [conn5] end connection 127.0.0.1:44409 (12 connections now open)
m30001| Thu Jun 14 01:44:01 [conn4] SocketException: remote: 127.0.0.1:48882 error: 9001 socket exception [0] server [127.0.0.1:48882]
m30001| Thu Jun 14 01:44:01 [conn4] end connection 127.0.0.1:48882 (8 connections now open)
Thu Jun 14 01:44:02 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:44:02 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:44:02 [conn5] SocketException: remote: 127.0.0.1:60309 error: 9001 socket exception [0] server [127.0.0.1:60309]
m30000| Thu Jun 14 01:44:02 [conn5] end connection 127.0.0.1:60309 (9 connections now open)
m30000| Thu Jun 14 01:44:02 [conn12] SocketException: remote: 127.0.0.1:60323 error: 9001 socket exception [0] server [127.0.0.1:60323]
m30000| Thu Jun 14 01:44:02 [conn12] end connection 127.0.0.1:60323 (8 connections now open)
m30001| Thu Jun 14 01:44:02 [conn5] SocketException: remote: 127.0.0.1:48885 error: 9001 socket exception [0] server [127.0.0.1:48885]
m30001| Thu Jun 14 01:44:02 [conn5] end connection 127.0.0.1:48885 (7 connections now open)
m29000| Thu Jun 14 01:44:02 [conn6] end connection 127.0.0.1:44412 (11 connections now open)
m29000| Thu Jun 14 01:44:02 [conn7] end connection 127.0.0.1:44413 (10 connections now open)
m29000| Thu Jun 14 01:44:02 [conn8] end connection 127.0.0.1:44414 (9 connections now open)
m30000| Thu Jun 14 01:44:02 [conn9] SocketException: remote: 127.0.0.1:60317 error: 9001 socket exception [0] server [127.0.0.1:60317]
m30000| Thu Jun 14 01:44:02 [conn9] end connection 127.0.0.1:60317 (7 connections now open)
m30001| Thu Jun 14 01:44:02 [conn10] SocketException: remote: 127.0.0.1:48899 error: 9001 socket exception [0] server [127.0.0.1:48899]
m30001| Thu Jun 14 01:44:02 [conn10] end connection 127.0.0.1:48899 (6 connections now open)
Thu Jun 14 01:44:03 shell: stopped mongo program on port 30998
m30997| Thu Jun 14 01:44:03 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:44:03 [conn7] SocketException: remote: 127.0.0.1:60313 error: 9001 socket exception [0] server [127.0.0.1:60313]
m30000| Thu Jun 14 01:44:03 [conn7] end connection 127.0.0.1:60313 (6 connections now open)
m30001| Thu Jun 14 01:44:03 [conn7] SocketException: remote: 127.0.0.1:48889 error: 9001 socket exception [0] server [127.0.0.1:48889]
m30001| Thu Jun 14 01:44:03 [conn7] end connection 127.0.0.1:48889 (5 connections now open)
m30000| Thu Jun 14 01:44:03 [conn8] SocketException: remote: 127.0.0.1:60315 error: 9001 socket exception [0] server [127.0.0.1:60315]
m30000| Thu Jun 14 01:44:03 [conn8] end connection 127.0.0.1:60315 (5 connections now open)
m29000| Thu Jun 14 01:44:03 [conn10] end connection 127.0.0.1:44418 (8 connections now open)
m29000| Thu Jun 14 01:44:03 [conn9] end connection 127.0.0.1:44417 (8 connections now open)
m29000| Thu Jun 14 01:44:03 [conn11] end connection 127.0.0.1:44419 (6 connections now open)
m29000| Thu Jun 14 01:44:03 [conn12] end connection 127.0.0.1:44420 (5 connections now open)
Thu Jun 14 01:44:04 shell: stopped mongo program on port 30997
m30000| Thu Jun 14 01:44:04 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:44:04 [interruptThread] now exiting
m30000| Thu Jun 14 01:44:04 dbexit:
m30000| Thu Jun 14 01:44:04 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:44:04 [interruptThread] closing listening socket: 10
m30000| Thu Jun 14 01:44:04 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:44:04 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:44:04 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:44:04 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:44:04 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:44:04 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:44:04 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:44:04 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:44:04 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:44:04 [interruptThread] shutdown: groupCommitMutex
m30000| Thu Jun 14 01:44:04 dbexit: really exiting now
m29000| Thu Jun 14 01:44:04 [conn15] end connection 127.0.0.1:44440 (4 connections now open)
m29000| Thu Jun 14 01:44:04 [conn14] end connection 127.0.0.1:44429 (3 connections now open)
m30001| Thu Jun 14 01:44:04 [conn9] SocketException: remote: 127.0.0.1:48895 error: 9001 socket exception [0] server [127.0.0.1:48895]
m30001| Thu Jun 14 01:44:04 [conn9] end connection 127.0.0.1:48895 (4 connections now open)
Thu Jun 14 01:44:05 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:44:05 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:44:05 [interruptThread] now exiting
m30001| Thu Jun 14 01:44:05 dbexit:
m30001| Thu Jun 14 01:44:05 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:44:05 [interruptThread] closing listening socket: 13
m30001| Thu Jun 14 01:44:05 [interruptThread] closing listening socket: 14
m30001| Thu Jun 14 01:44:05 [interruptThread] closing listening socket: 16
m30001| Thu Jun 14 01:44:05 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:44:05 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:44:05 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:44:05 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:44:05 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:44:05 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:44:05 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:44:05 [interruptThread] shutdown: groupCommitMutex
m30001| Thu Jun 14 01:44:05 dbexit: really exiting now
m29000| Thu Jun 14 01:44:05 [conn16] end connection 127.0.0.1:44443 (2 connections now open)
Thu Jun 14 01:44:06 shell: stopped mongo program on port 30001
m29000| Thu Jun 14 01:44:06 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Thu Jun 14 01:44:06 [interruptThread] now exiting
m29000| Thu Jun 14 01:44:06 dbexit:
m29000| Thu Jun 14 01:44:06 [interruptThread] shutdown: going to close listening sockets...
m29000| Thu Jun 14 01:44:06 [interruptThread] closing listening socket: 17
m29000| Thu Jun 14 01:44:06 [interruptThread] closing listening socket: 18
m29000| Thu Jun 14 01:44:06 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Thu Jun 14 01:44:06 [interruptThread] shutdown: going to flush diaglog...
m29000| Thu Jun 14 01:44:06 [interruptThread] shutdown: going to close sockets...
m29000| Thu Jun 14 01:44:06 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Thu Jun 14 01:44:06 [interruptThread] shutdown: closing all files...
m29000| Thu Jun 14 01:44:06 [interruptThread] closeAllFiles() finished
m29000| Thu Jun 14 01:44:06 [interruptThread] shutdown: removing fs lock...
m29000| Thu Jun 14 01:44:06 dbexit: really exiting now
Thu Jun 14 01:44:07 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 13.762 seconds ***
13862.787008ms
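The "ERROR: error processing writeback: 8015 can only delete with a non-shard key pattern if can delete as many as we find" line at 01:44:01 above is this mongos refusing to replay a writeback for a single-document remove whose query does not contain the shard key. A minimal sketch of the kind of shell call that produces that message on this 2.1.x build (the namespace foo.bar and the query { d: "d" } come from the log; the shard key { _id: 1 } is an assumption for illustration only):
    // assumes a cluster where foo.bar is sharded on a key that does not include 'd'
    sh.enableSharding("foo");
    sh.shardCollection("foo.bar", { _id: 1 });
    var foo = db.getSiblingDB("foo");
    foo.bar.remove({ d: "d" });        // multi-delete: routed to every shard that may hold matches
    foo.bar.remove({ d: "d" }, true);  // justOne without the shard key -> error 8015 from mongos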
Thu Jun 14 01:44:07 [initandlisten] connection accepted from 127.0.0.1:34981 #49 (3 connections now open)
*******************************************
Test : movePrimary1.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/movePrimary1.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/movePrimary1.js";TestData.testFile = "movePrimary1.js";TestData.testName = "movePrimary1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:44:07 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/movePrimary10'
Thu Jun 14 01:44:07 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/movePrimary10
m30000| Thu Jun 14 01:44:07
m30000| Thu Jun 14 01:44:07 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:44:07
m30000| Thu Jun 14 01:44:07 [initandlisten] MongoDB starting : pid=27414 port=30000 dbpath=/data/db/movePrimary10 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:44:07 [initandlisten]
m30000| Thu Jun 14 01:44:07 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:44:07 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:44:07 [initandlisten]
m30000| Thu Jun 14 01:44:07 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:44:07 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:44:07 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:44:07 [initandlisten]
m30000| Thu Jun 14 01:44:07 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:44:07 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:44:07 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:44:07 [initandlisten] options: { dbpath: "/data/db/movePrimary10", port: 30000 }
m30000| Thu Jun 14 01:44:07 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:44:07 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/movePrimary11'
m30000| Thu Jun 14 01:44:07 [initandlisten] connection accepted from 127.0.0.1:60327 #1 (1 connection now open)
Thu Jun 14 01:44:07 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/movePrimary11
m30001| Thu Jun 14 01:44:08
m30001| Thu Jun 14 01:44:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:44:08
m30001| Thu Jun 14 01:44:08 [initandlisten] MongoDB starting : pid=27427 port=30001 dbpath=/data/db/movePrimary11 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:44:08 [initandlisten]
m30001| Thu Jun 14 01:44:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:44:08 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:44:08 [initandlisten]
m30001| Thu Jun 14 01:44:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:44:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:44:08 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:44:08 [initandlisten]
m30001| Thu Jun 14 01:44:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:44:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:44:08 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:44:08 [initandlisten] options: { dbpath: "/data/db/movePrimary11", port: 30001 }
m30001| Thu Jun 14 01:44:08 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:44:08 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30000| Thu Jun 14 01:44:08 [initandlisten] connection accepted from 127.0.0.1:60330 #2 (2 connections now open)
ShardingTest movePrimary1 :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001
    ]
}
m30000| Thu Jun 14 01:44:08 [FileAllocator] allocating new datafile /data/db/movePrimary10/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:44:08 [FileAllocator] creating directory /data/db/movePrimary10/_tmp
Thu Jun 14 01:44:08 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000
m30001| Thu Jun 14 01:44:08 [initandlisten] connection accepted from 127.0.0.1:48904 #1 (1 connection now open)
m30999| Thu Jun 14 01:44:08 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:44:08 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27441 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:44:08 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:44:08 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:44:08 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Thu Jun 14 01:44:08 [initandlisten] connection accepted from 127.0.0.1:60332 #3 (3 connections now open)
m30000| Thu Jun 14 01:44:08 [FileAllocator] done allocating datafile /data/db/movePrimary10/config.ns, size: 16MB, took 0.303 secs
m30000| Thu Jun 14 01:44:08 [FileAllocator] allocating new datafile /data/db/movePrimary10/config.0, filling with zeroes...
m30000| Thu Jun 14 01:44:08 [FileAllocator] done allocating datafile /data/db/movePrimary10/config.0, size: 16MB, took 0.337 secs
m30000| Thu Jun 14 01:44:08 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:44:08 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn2] insert config.settings keyUpdates:0 locks(micros) w:660731 660ms
m30000| Thu Jun 14 01:44:08 [initandlisten] connection accepted from 127.0.0.1:60336 #4 (4 connections now open)
m30000| Thu Jun 14 01:44:08 [FileAllocator] allocating new datafile /data/db/movePrimary10/config.1, filling with zeroes...
m30000| Thu Jun 14 01:44:08 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:44:08 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:44:08 [mongosMain] waiting for connections on port 30999
m30000| Thu Jun 14 01:44:08 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:44:08 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:44:08 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:44:08 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:44:08 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:44:08 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:44:08 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:44:08
m30999| Thu Jun 14 01:44:08 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:44:08 [initandlisten] connection accepted from 127.0.0.1:60337 #5 (5 connections now open)
m30000| Thu Jun 14 01:44:08 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:44:08 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:44:08 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652648:1804289383 (sleeping for 30000ms)
m30000| Thu Jun 14 01:44:08 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:44:08 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:08 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:44:08 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:44:08 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652648:1804289383' acquired, ts : 4fd97a286d3ba518e82a585c
m30999| Thu Jun 14 01:44:08 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652648:1804289383' unlocked.
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:44:09 [mongosMain] connection accepted from 127.0.0.1:54400 #1 (1 connection now open)
m30999| Thu Jun 14 01:44:09 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:44:09 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:44:09 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:44:09 [conn] put [admin] on: config:localhost:30000
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30000| Thu Jun 14 01:44:09 [FileAllocator] done allocating datafile /data/db/movePrimary10/config.1, size: 32MB, took 0.605 secs
m30000| Thu Jun 14 01:44:09 [conn4] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:397 w:1720 reslen:177 447ms
m30999| Thu Jun 14 01:44:09 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30001| Thu Jun 14 01:44:09 [initandlisten] connection accepted from 127.0.0.1:48914 #2 (2 connections now open)
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:44:09 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30999| Thu Jun 14 01:44:09 [conn] couldn't find database [test1] in config db
m30999| Thu Jun 14 01:44:09 [conn] put [test1] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:44:09 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97a286d3ba518e82a585b
m30999| Thu Jun 14 01:44:09 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97a286d3ba518e82a585b
m30000| Thu Jun 14 01:44:09 [initandlisten] connection accepted from 127.0.0.1:60340 #6 (6 connections now open)
m30001| Thu Jun 14 01:44:09 [initandlisten] connection accepted from 127.0.0.1:48916 #3 (3 connections now open)
m30001| Thu Jun 14 01:44:09 [FileAllocator] allocating new datafile /data/db/movePrimary11/test1.ns, filling with zeroes...
m30001| Thu Jun 14 01:44:09 [FileAllocator] creating directory /data/db/movePrimary11/_tmp
m30001| Thu Jun 14 01:44:09 [FileAllocator] done allocating datafile /data/db/movePrimary11/test1.ns, size: 16MB, took 0.356 secs
m30001| Thu Jun 14 01:44:09 [FileAllocator] allocating new datafile /data/db/movePrimary11/test1.0, filling with zeroes...
m30001| Thu Jun 14 01:44:10 [FileAllocator] done allocating datafile /data/db/movePrimary11/test1.0, size: 16MB, took 0.421 secs
m30001| Thu Jun 14 01:44:10 [FileAllocator] allocating new datafile /data/db/movePrimary11/test1.1, filling with zeroes...
m30001| Thu Jun 14 01:44:10 [conn3] build index test1.foo { _id: 1 }
m30001| Thu Jun 14 01:44:10 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:10 [conn3] insert test1.foo keyUpdates:0 locks(micros) W:74 w:790749 790ms
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "test1", "partitioned" : false, "primary" : "shard0001" }
m30001| Thu Jun 14 01:44:10 [initandlisten] connection accepted from 127.0.0.1:48917 #4 (4 connections now open)
m30999| Thu Jun 14 01:44:10 [conn] Moving test1 primary from: shard0001:localhost:30001 to: shard0000:localhost:30000
m30999| Thu Jun 14 01:44:10 [conn] created new distributed lock for test1-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:10 [conn] distributed lock 'test1-movePrimary/domU-12-31-39-01-70-B4:30999:1339652648:1804289383' acquired, ts : 4fd97a2a6d3ba518e82a585d
m30000| Thu Jun 14 01:44:10 [FileAllocator] allocating new datafile /data/db/movePrimary10/test1.ns, filling with zeroes...
m30001| Thu Jun 14 01:44:11 [FileAllocator] done allocating datafile /data/db/movePrimary11/test1.1, size: 32MB, took 1.174 secs
m30000| Thu Jun 14 01:44:11 [FileAllocator] done allocating datafile /data/db/movePrimary10/test1.ns, size: 16MB, took 1.163 secs
m30000| Thu Jun 14 01:44:11 [FileAllocator] allocating new datafile /data/db/movePrimary10/test1.0, filling with zeroes...
m30000| Thu Jun 14 01:44:11 [FileAllocator] done allocating datafile /data/db/movePrimary10/test1.0, size: 16MB, took 0.275 secs
m30000| Thu Jun 14 01:44:11 [FileAllocator] allocating new datafile /data/db/movePrimary10/test1.1, filling with zeroes...
m30000| Thu Jun 14 01:44:11 [conn5] build index test1.foo { _id: 1 }
m30000| Thu Jun 14 01:44:11 [conn5] fastBuildIndex dupsToDrop:0
m30000| Thu Jun 14 01:44:11 [conn5] build index done. scanned 3 total records. 0 secs
m30000| Thu Jun 14 01:44:11 [conn5] command test1.$cmd command: { clone: "localhost:30001", collsToIgnore: {} } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) W:27 r:360 w:1450549 reslen:72 1450ms
m30999| Thu Jun 14 01:44:11 [conn] movePrimary dropping database on localhost:30001, no sharded collections in test1
m30001| Thu Jun 14 01:44:11 [conn4] end connection 127.0.0.1:48917 (3 connections now open)
m30001| Thu Jun 14 01:44:11 [initandlisten] connection accepted from 127.0.0.1:48918 #5 (4 connections now open)
m30001| Thu Jun 14 01:44:11 [conn5] dropDatabase test1
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "test1", "partitioned" : false, "primary" : "shard0000" }
m30999| Thu Jun 14 01:44:11 [conn] distributed lock 'test1-movePrimary/domU-12-31-39-01-70-B4:30999:1339652648:1804289383' unlocked.
m30999| Thu Jun 14 01:44:11 [conn] Moving test1 primary from: shard0000:localhost:30000 to: shard0001:localhost:30001
m30999| Thu Jun 14 01:44:11 [conn] created new distributed lock for test1-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:11 [conn] distributed lock 'test1-movePrimary/domU-12-31-39-01-70-B4:30999:1339652648:1804289383' acquired, ts : 4fd97a2b6d3ba518e82a585e
m30000| Thu Jun 14 01:44:11 [initandlisten] connection accepted from 127.0.0.1:60344 #7 (7 connections now open)
m30001| Thu Jun 14 01:44:11 [FileAllocator] allocating new datafile /data/db/movePrimary11/test1.ns, filling with zeroes...
m30000| Thu Jun 14 01:44:12 [FileAllocator] done allocating datafile /data/db/movePrimary10/test1.1, size: 32MB, took 0.903 secs
m30001| Thu Jun 14 01:44:12 [FileAllocator] done allocating datafile /data/db/movePrimary11/test1.ns, size: 16MB, took 0.876 secs
m30001| Thu Jun 14 01:44:12 [FileAllocator] allocating new datafile /data/db/movePrimary11/test1.0, filling with zeroes...
m30001| Thu Jun 14 01:44:12 [FileAllocator] done allocating datafile /data/db/movePrimary11/test1.0, size: 16MB, took 0.307 secs
m30001| Thu Jun 14 01:44:12 [FileAllocator] allocating new datafile /data/db/movePrimary11/test1.1, filling with zeroes...
m30001| Thu Jun 14 01:44:12 [conn5] build index test1.foo { _id: 1 }
m30001| Thu Jun 14 01:44:12 [conn5] fastBuildIndex dupsToDrop:0
m30001| Thu Jun 14 01:44:12 [conn5] build index done. scanned 3 total records. 0 secs
m30001| Thu Jun 14 01:44:12 [conn5] command test1.$cmd command: { clone: "localhost:30000", collsToIgnore: {} } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) W:18427 w:1193447 reslen:72 1194ms
m30999| Thu Jun 14 01:44:12 [conn] movePrimary dropping database on localhost:30000, no sharded collections in test1
m30000| Thu Jun 14 01:44:12 [conn7] end connection 127.0.0.1:60344 (6 connections now open)
m30000| Thu Jun 14 01:44:12 [conn5] dropDatabase test1
m30999| Thu Jun 14 01:44:12 [conn] distributed lock 'test1-movePrimary/domU-12-31-39-01-70-B4:30999:1339652648:1804289383' unlocked.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "test1", "partitioned" : false, "primary" : "shard0001" }
m30999| Thu Jun 14 01:44:12 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:44:12 [conn3] end connection 127.0.0.1:60332 (5 connections now open)
m30000| Thu Jun 14 01:44:12 [conn5] end connection 127.0.0.1:60337 (4 connections now open)
m30000| Thu Jun 14 01:44:12 [conn6] end connection 127.0.0.1:60340 (4 connections now open)
m30001| Thu Jun 14 01:44:12 [conn5] end connection 127.0.0.1:48918 (3 connections now open)
m30001| Thu Jun 14 01:44:12 [conn3] end connection 127.0.0.1:48916 (3 connections now open)
m30001| Thu Jun 14 01:44:13 [FileAllocator] done allocating datafile /data/db/movePrimary11/test1.1, size: 32MB, took 0.795 secs
Thu Jun 14 01:44:13 shell: stopped mongo program on port 30999
m30000| Thu Jun 14 01:44:13 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:44:13 [interruptThread] now exiting
m30000| Thu Jun 14 01:44:13 dbexit:
m30000| Thu Jun 14 01:44:13 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:44:13 [interruptThread] closing listening socket: 11
m30000| Thu Jun 14 01:44:13 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:44:13 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:44:13 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:44:13 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:44:13 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:44:13 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:44:13 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:44:14 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:44:14 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:44:14 dbexit: really exiting now
Thu Jun 14 01:44:14 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:44:14 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:44:14 [interruptThread] now exiting
m30001| Thu Jun 14 01:44:14 dbexit:
m30001| Thu Jun 14 01:44:14 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:44:14 [interruptThread] closing listening socket: 14
m30001| Thu Jun 14 01:44:14 [interruptThread] closing listening socket: 16
m30001| Thu Jun 14 01:44:14 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:44:14 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:44:14 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:44:14 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:44:14 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:44:14 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:44:15 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:44:15 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:44:15 dbexit: really exiting now
Thu Jun 14 01:44:16 shell: stopped mongo program on port 30001
*** ShardingTest movePrimary1 completed successfully in 8.255 seconds ***
8313.611031ms
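For reference, the flow the movePrimary1 log above corresponds to can be sketched roughly as follows in the shell. This is a hypothetical reconstruction from the log, not the actual movePrimary1.js source; the shard names and the three inserted documents ("scanned 3 total records" above) are taken from the output, everything else is assumed:
    // rough sketch: two shards, one unsharded database, movePrimary back and forth
    var s = new ShardingTest("movePrimary1", 2);       // jstests helper seen in the banner above
    var db1 = s.getDB("test1");
    db1.foo.save({ a: 1 }); db1.foo.save({ a: 2 }); db1.foo.save({ a: 3 });
    printShardingStatus(s.getDB("config"));            // "--- Sharding Status ---" blocks above
    // mongos clones the unsharded collections to the target shard, then drops the
    // database on the old primary ("movePrimary dropping database on ..." in the log)
    s.getDB("admin").runCommand({ movePrimary: "test1", to: "shard0000" });
    s.getDB("admin").runCommand({ movePrimary: "test1", to: "shard0001" });
    s.stop();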
Thu Jun 14 01:44:16 [initandlisten] connection accepted from 127.0.0.1:35001 #50 (4 connections now open)
*******************************************
Test : moveprimary_ignore_sharded.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/moveprimary_ignore_sharded.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/moveprimary_ignore_sharded.js";TestData.testFile = "moveprimary_ignore_sharded.js";TestData.testName = "moveprimary_ignore_sharded";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:44:16 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Thu Jun 14 01:44:16 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Thu Jun 14 01:44:16
m30000| Thu Jun 14 01:44:16 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:44:16
m30000| Thu Jun 14 01:44:16 [initandlisten] MongoDB starting : pid=27480 port=30000 dbpath=/data/db/test0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:44:16 [initandlisten]
m30000| Thu Jun 14 01:44:16 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:44:16 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:44:16 [initandlisten]
m30000| Thu Jun 14 01:44:16 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:44:16 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:44:16 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:44:16 [initandlisten]
m30000| Thu Jun 14 01:44:16 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:44:16 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:44:16 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:44:16 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Thu Jun 14 01:44:16 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:44:16 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/test1'
Thu Jun 14 01:44:16 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/test1
m30000| Thu Jun 14 01:44:16 [initandlisten] connection accepted from 127.0.0.1:60347 #1 (1 connection now open)
m30001| Thu Jun 14 01:44:16
m30001| Thu Jun 14 01:44:16 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:44:16
m30001| Thu Jun 14 01:44:16 [initandlisten] MongoDB starting : pid=27492 port=30001 dbpath=/data/db/test1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:44:16 [initandlisten]
m30001| Thu Jun 14 01:44:16 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:44:16 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:44:16 [initandlisten]
m30001| Thu Jun 14 01:44:16 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:44:16 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:44:16 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:44:16 [initandlisten]
m30001| Thu Jun 14 01:44:16 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:44:16 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:44:16 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:44:16 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Thu Jun 14 01:44:16 [initandlisten] waiting for connections on port 30001
m30001| Thu Jun 14 01:44:16 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
m30001| Thu Jun 14 01:44:16 [initandlisten] connection accepted from 127.0.0.1:48924 #1 (1 connection now open)
ShardingTest test :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001
    ]
}
Thu Jun 14 01:44:16 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:44:16 [initandlisten] connection accepted from 127.0.0.1:60350 #2 (2 connections now open)
m30000| Thu Jun 14 01:44:16 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:44:16 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Thu Jun 14 01:44:16 [initandlisten] connection accepted from 127.0.0.1:60352 #3 (3 connections now open)
m30999| Thu Jun 14 01:44:16 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:44:16 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27507 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:44:16 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:44:16 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:44:16 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:44:16 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:44:16 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:16 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:16 [mongosMain] connected connection!
m30000| Thu Jun 14 01:44:16 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0.284 secs
m30000| Thu Jun 14 01:44:16 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:44:17 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 16MB, took 0.263 secs
m30000| Thu Jun 14 01:44:17 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:44:17 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn2] insert config.settings keyUpdates:0 locks(micros) w:558829 558ms
m30000| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:60355 #4 (4 connections now open)
m30000| Thu Jun 14 01:44:17 [conn4] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:44:17 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:44:17 [conn3] build index config.shards { host: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn4] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:60356 #5 (5 connections now open)
m30000| Thu Jun 14 01:44:17 [conn5] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [conn3] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:44:17 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:17 [mongosMain] connected connection!
m30999| Thu Jun 14 01:44:17 [mongosMain] MaxChunkSize: 50
m30999| Thu Jun 14 01:44:17 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:44:17 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:44:17 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:44:17 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:44:17 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: PeriodicTask::Runner
m30999| Thu Jun 14 01:44:17 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:44:17 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:44:17
m30999| Thu Jun 14 01:44:17 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:17 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:17 [Balancer] connected connection!
m30999| Thu Jun 14 01:44:17 [Balancer] Refreshing MaxChunkSize: 50
m30999| Thu Jun 14 01:44:17 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:44:17 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652657:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652657:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652657:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:44:17 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a310955c08e55c3f85a" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:44:17 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652657:1804289383' acquired, ts : 4fd97a310955c08e55c3f85a
m30999| Thu Jun 14 01:44:17 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:44:17 [Balancer] no collections to balance
m30999| Thu Jun 14 01:44:17 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:44:17 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:44:17 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652657:1804289383' unlocked.
m30999| Thu Jun 14 01:44:17 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652657:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:44:17 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:44:17 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652657:1804289383', sleeping for 30000ms
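The verbose balancer output above shows each mongos taking the "balancer" distributed lock by comparing the lock document stored in config.locks and heartbeating through config.lockpings (both collections appear in the index builds above). If needed, the same bookkeeping can be inspected from any shell connected to the cluster, roughly like this (a sketch; the state-value meanings are as of this 2.1.x build):
    // hypothetical inspection of the distributed-lock documents shown in the log
    var conf = db.getSiblingDB("config");
    conf.locks.find({ _id: "balancer" }).pretty();   // state 0 = unlocked, non-zero = held or being taken
    conf.lockpings.find().sort({ ping: -1 });        // heartbeat docs written by each LockPinger thread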
m30999| Thu Jun 14 01:44:17 [mongosMain] connection accepted from 127.0.0.1:54419 #1 (1 connection now open)
Thu Jun 14 01:44:17 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30998 --configdb localhost:30000 -v
m30998| Thu Jun 14 01:44:17 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Thu Jun 14 01:44:17 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27527 port=30998 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30998| Thu Jun 14 01:44:17 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30998| Thu Jun 14 01:44:17 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30998| Thu Jun 14 01:44:17 [mongosMain] options: { configdb: "localhost:30000", port: 30998, verbose: true }
m30998| Thu Jun 14 01:44:17 [mongosMain] config string : localhost:30000
m30998| Thu Jun 14 01:44:17 [mongosMain] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:60358 #6 (6 connections now open)
m30000| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:60360 #7 (7 connections now open)
m30000| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:60361 #8 (8 connections now open)
m30998| Thu Jun 14 01:44:17 [mongosMain] connected connection!
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: CheckConfigServers
m30998| Thu Jun 14 01:44:17 [CheckConfigServers] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:44:17 [mongosMain] MaxChunkSize: 50
m30998| Thu Jun 14 01:44:17 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:44:17 [mongosMain] waiting for connections on port 30998
m30998| Thu Jun 14 01:44:17 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30998| Thu Jun 14 01:44:17 [websvr] admin web console waiting for connections on port 31998
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: Balancer
m30998| Thu Jun 14 01:44:17 [Balancer] about to contact config servers and shards
m30998| Thu Jun 14 01:44:17 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: cursorTimeout
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: PeriodicTask::Runner
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:44:17 [Balancer] connected connection!
m30998| Thu Jun 14 01:44:17 [Balancer] config servers and shards contacted successfully
m30998| Thu Jun 14 01:44:17 [Balancer] balancer id: domU-12-31-39-01-70-B4:30998 started at Jun 14 01:44:17
m30998| Thu Jun 14 01:44:17 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Thu Jun 14 01:44:17 [Balancer] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30998| Thu Jun 14 01:44:17 [Balancer] connected connection!
m30998| Thu Jun 14 01:44:17 [Balancer] Refreshing MaxChunkSize: 50
m30998| Thu Jun 14 01:44:17 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652657:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339652657:1804289383:Balancer:846930886",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339652657:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:44:17 2012" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "4fd97a311d2308d2e916ca01" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "4fd97a310955c08e55c3f85a" } }
m30998| Thu Jun 14 01:44:17 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652657:1804289383' acquired, ts : 4fd97a311d2308d2e916ca01
m30998| Thu Jun 14 01:44:17 [Balancer] *** start balancing round
m30998| Thu Jun 14 01:44:17 [Balancer] no collections to balance
m30998| Thu Jun 14 01:44:17 [Balancer] no need to move any chunk
m30998| Thu Jun 14 01:44:17 [Balancer] *** end of balancing round
m30998| Thu Jun 14 01:44:17 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30998:1339652657:1804289383' unlocked.
m30998| Thu Jun 14 01:44:17 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30998:1339652657:1804289383 (sleeping for 30000ms)
m30998| Thu Jun 14 01:44:17 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:44:17 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30998:1339652657:1804289383', sleeping for 30000ms
m30998| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:60362 #9 (9 connections now open)
m30998| Thu Jun 14 01:44:17 [CheckConfigServers] connected connection!
m30998| Thu Jun 14 01:44:17 [mongosMain] connection accepted from 127.0.0.1:42242 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:44:17 [conn] couldn't find database [admin] in config db
m30999| Thu Jun 14 01:44:17 [conn] put [admin] on: config:localhost:30000
m30000| Thu Jun 14 01:44:17 [conn3] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:17 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 32MB, took 0.709 secs
m30000| Thu Jun 14 01:44:17 [conn4] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:315 w:1464 reslen:177 444ms
m30999| Thu Jun 14 01:44:17 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:48939 #2 (2 connections now open)
m30999| Thu Jun 14 01:44:17 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:17 [conn] connected connection!
m30999| Thu Jun 14 01:44:17 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30001| Thu Jun 14 01:44:17 [conn2] runQuery called admin.$cmd { serverStatus: 1 }
m30001| Thu Jun 14 01:44:17 [conn2] run command admin.$cmd { serverStatus: 1 }
m30001| Thu Jun 14 01:44:17 [conn2] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:19 reslen:1550 0ms
m30001| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:48941 #3 (3 connections now open)
m30001| Thu Jun 14 01:44:17 [conn3] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true }
m30001| Thu Jun 14 01:44:17 [conn3] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true }
m30001| Thu Jun 14 01:44:17 [conn3] command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true }
m30001| Thu Jun 14 01:44:17 [conn3] entering shard mode for connection
m30001| Thu Jun 14 01:44:17 [conn3] adding sharding hook
m30001| Thu Jun 14 01:44:17 [conn3] config string : localhost:30000
m30001| Thu Jun 14 01:44:17 [conn3] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:258 reslen:51 0ms
m30001| Thu Jun 14 01:44:17 [conn3] opening db: foo
m30001| Thu Jun 14 01:44:17 [conn2] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a310955c08e55c3f859') }
m30000| Thu Jun 14 01:44:17 [conn3] runQuery called config.databases { _id: "foo" }
m30000| Thu Jun 14 01:44:17 [conn3] query config.databases query: { _id: "foo" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1827 w:4340 reslen:20 0ms
m30000| Thu Jun 14 01:44:17 [conn4] runQuery called config.databases { _id: /^foo$/i }
m30000| Thu Jun 14 01:44:17 [conn4] query config.databases query: { _id: /^foo$/i } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:1015 w:1567 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:44:17 [conn4] runQuery called admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:17 [conn4] run command admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:17 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:1031 w:1567 reslen:1713 0ms
m30000| Thu Jun 14 01:44:17 [conn4] runQuery called admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:17 [conn4] run command admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:17 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:1046 w:1567 reslen:1713 0ms
m30000| Thu Jun 14 01:44:17 [conn3] update config.databases query: { _id: "foo" } update: { _id: "foo", partitioned: false, primary: "shard0001" } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1827 w:4394 0ms
m30000| Thu Jun 14 01:44:17 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:17 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1827 w:4394 reslen:85 0ms
m30001| Thu Jun 14 01:44:17 [conn2] run command admin.$cmd { writebacklisten: ObjectId('4fd97a310955c08e55c3f859') }
m30001| Thu Jun 14 01:44:17 [conn2] command: { writebacklisten: ObjectId('4fd97a310955c08e55c3f859') }
m30000| Thu Jun 14 01:44:17 [initandlisten] connection accepted from 127.0.0.1:60365 #10 (10 connections now open)
m30000| Thu Jun 14 01:44:17 [conn10] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true }
m30000| Thu Jun 14 01:44:17 [conn10] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true }
m30000| Thu Jun 14 01:44:17 [conn10] command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true }
m30000| Thu Jun 14 01:44:17 [conn10] entering shard mode for connection
m30000| Thu Jun 14 01:44:17 [conn10] adding sharding hook
m30000| Thu Jun 14 01:44:17 [conn10] config string : localhost:30000
m30000| Thu Jun 14 01:44:17 [conn10] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true } ntoreturn:1 keyUpdates:0 locks(micros) W:78 reslen:51 0ms
m30000| Thu Jun 14 01:44:17 [conn4] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a310955c08e55c3f859') }
m30000| Thu Jun 14 01:44:17 [conn4] run command admin.$cmd { writebacklisten: ObjectId('4fd97a310955c08e55c3f859') }
m30000| Thu Jun 14 01:44:17 [conn4] command: { writebacklisten: ObjectId('4fd97a310955c08e55c3f859') }
m30000| Thu Jun 14 01:44:17 [conn10] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:17 [conn10] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:17 [conn10] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:78 reslen:67 0ms
m30999| Thu Jun 14 01:44:17 [conn] couldn't find database [foo] in config db
m30999| Thu Jun 14 01:44:17 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:44:17 [conn] put [foo] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:44:17 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:17 [conn] connected connection!
m30999| Thu Jun 14 01:44:17 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97a310955c08e55c3f859
m30999| Thu Jun 14 01:44:17 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:44:17 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:17 [conn] connected connection!
m30999| Thu Jun 14 01:44:17 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97a310955c08e55c3f859
m30999| Thu Jun 14 01:44:17 [conn] initializing shard connection to localhost:30001
m30001| Thu Jun 14 01:44:17 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Thu Jun 14 01:44:17 [FileAllocator] creating directory /data/db/test1/_tmp
m30001| Thu Jun 14 01:44:17 [FileAllocator] flushing directory /data/db/test1
m30999| Thu Jun 14 01:44:17 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Thu Jun 14 01:44:17 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:44:18 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.378 secs
m30001| Thu Jun 14 01:44:18 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Thu Jun 14 01:44:18 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:44:18 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.31 secs
m30001| Thu Jun 14 01:44:18 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Thu Jun 14 01:44:18 [conn3] allocExtent foo.coll0 size 2304 0
m30001| Thu Jun 14 01:44:18 [conn3] adding _id index for collection foo.coll0
m30001| Thu Jun 14 01:44:18 [conn3] allocExtent foo.system.indexes size 3584 0
m30001| Thu Jun 14 01:44:18 [conn3] New namespace: foo.system.indexes
m30001| Thu Jun 14 01:44:18 [conn3] allocExtent foo.system.namespaces size 2048 0
m30001| Thu Jun 14 01:44:18 [conn3] New namespace: foo.system.namespaces
m30001| Thu Jun 14 01:44:18 [conn3] build index foo.coll0 { _id: 1 }
m30001| mem info: before index start vsize: 143 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:18 [conn3] external sort root: /data/db/test1/_tmp/esort.1339652658.0/
m30001| mem info: before final sort vsize: 143 resident: 32 mapped: 32
m30001| mem info: after final sort vsize: 143 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:18 [conn3] external sort used : 0 files in 0 secs
m30001| Thu Jun 14 01:44:18 [conn3] allocExtent foo.coll0.$_id_ size 36864 0
m30001| Thu Jun 14 01:44:18 [conn3] New namespace: foo.coll0.$_id_
m30001| Thu Jun 14 01:44:18 [conn3] done building bottom layer, going to commit
m30001| Thu Jun 14 01:44:18 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:18 [conn3] New namespace: foo.coll0
m30001| Thu Jun 14 01:44:18 [conn3] insert foo.coll0 keyUpdates:0 locks(micros) W:258 w:703884 703ms
m30001| Thu Jun 14 01:44:18 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:18 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:18 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:258 w:703884 reslen:67 0ms
m30001| Thu Jun 14 01:44:18 [initandlisten] connection accepted from 127.0.0.1:48942 #4 (4 connections now open)
m30001| Thu Jun 14 01:44:18 [conn4] runQuery called admin.$cmd { serverStatus: 1 }
m30001| Thu Jun 14 01:44:18 [conn4] run command admin.$cmd { serverStatus: 1 }
m30001| Thu Jun 14 01:44:18 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:17 reslen:1707 0ms
m30000| Thu Jun 14 01:44:18 [conn3] runQuery called config.databases { _id: "bar" }
m30000| Thu Jun 14 01:44:18 [conn3] query config.databases query: { _id: "bar" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1855 w:4394 reslen:20 0ms
m30000| Thu Jun 14 01:44:18 [conn5] runQuery called config.databases { _id: /^bar$/i }
m30000| Thu Jun 14 01:44:18 [conn5] query config.databases query: { _id: /^bar$/i } ntoreturn:1 keyUpdates:0 locks(micros) r:427 w:976 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:44:18 [conn5] runQuery called admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:18 [conn5] run command admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:18 [conn5] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:444 w:976 reslen:1713 0ms
m30000| Thu Jun 14 01:44:18 [conn5] runQuery called admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:18 [conn5] run command admin.$cmd { serverStatus: 1 }
m30000| Thu Jun 14 01:44:18 [conn5] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:460 w:976 reslen:1713 0ms
m30000| Thu Jun 14 01:44:18 [conn3] update config.databases query: { _id: "bar" } update: { _id: "bar", partitioned: false, primary: "shard0000" } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1855 w:4470 0ms
m30000| Thu Jun 14 01:44:18 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:18 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:18 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4470 reslen:85 0ms
m30000| Thu Jun 14 01:44:18 [conn10] opening db: bar
m30000| Thu Jun 14 01:44:18 [FileAllocator] allocating new datafile /data/db/test0/bar.ns, filling with zeroes...
m30999| Thu Jun 14 01:44:18 [conn] couldn't find database [bar] in config db
m30999| Thu Jun 14 01:44:18 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:44:18 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:18 [conn] connected connection!
m30999| Thu Jun 14 01:44:18 [conn] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:44:18 [conn] put [bar] on: shard0000:localhost:30000
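The three mongos lines above show the "bar" database being created: it is not yet in config.databases, so mongos picks what it judges the best shard for a new database (here shard0000, based on the serverStatus figures gathered just before) and records the assignment. A minimal sketch, assuming the mongos from this test listens on localhost:30999, of reading that record back:

var mongosA = new Mongo("localhost:30999");                       // assumed mongos address for this test
printjson(mongosA.getDB("config").databases.findOne({ _id: "bar" }));
// per the config.databases update logged above: { "_id" : "bar", "partitioned" : false, "primary" : "shard0000" }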
m30000| Thu Jun 14 01:44:18 [FileAllocator] flushing directory /data/db/test0
m30001| Thu Jun 14 01:44:18 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:44:19 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.902 secs
m30000| Thu Jun 14 01:44:19 [FileAllocator] done allocating datafile /data/db/test0/bar.ns, size: 16MB, took 0.898 secs
m30000| Thu Jun 14 01:44:19 [FileAllocator] allocating new datafile /data/db/test0/bar.0, filling with zeroes...
m30000| Thu Jun 14 01:44:19 [FileAllocator] flushing directory /data/db/test0
m30000| Thu Jun 14 01:44:19 [FileAllocator] done allocating datafile /data/db/test0/bar.0, size: 16MB, took 0.286 secs
m30000| Thu Jun 14 01:44:19 [FileAllocator] allocating new datafile /data/db/test0/bar.1, filling with zeroes...
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.coll0 size 2304 0
m30000| Thu Jun 14 01:44:19 [conn10] adding _id index for collection bar.coll0
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.system.indexes size 3584 0
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.system.indexes
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.system.namespaces size 2048 0
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.system.namespaces
m30000| Thu Jun 14 01:44:19 [conn10] build index bar.coll0 { _id: 1 }
m30000| mem info: before index start vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn10] external sort root: /data/db/test0/_tmp/esort.1339652659.13/
m30000| mem info: before final sort vsize: 182 resident: 49 mapped: 64
m30000| mem info: after final sort vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn10] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.coll0.$_id_ size 36864 0
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.coll0.$_id_
m30000| Thu Jun 14 01:44:19 [conn10] done building bottom layer, going to commit
m30000| Thu Jun 14 01:44:19 [conn10] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.coll0
m30000| Thu Jun 14 01:44:19 [conn10] insert bar.coll0 keyUpdates:0 locks(micros) W:78 w:1194324 1194ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn10] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn10] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:78 w:1194324 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.coll1 size 2304 0
m30000| Thu Jun 14 01:44:19 [conn10] adding _id index for collection bar.coll1
m30000| Thu Jun 14 01:44:19 [conn10] build index bar.coll1 { _id: 1 }
m30000| mem info: before index start vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn10] external sort root: /data/db/test0/_tmp/esort.1339652659.14/
m30000| mem info: before final sort vsize: 182 resident: 49 mapped: 64
m30000| mem info: after final sort vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn10] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.coll1.$_id_ size 36864 0
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.coll1.$_id_
m30000| Thu Jun 14 01:44:19 [conn10] done building bottom layer, going to commit
m30000| Thu Jun 14 01:44:19 [conn10] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.coll1
m30000| Thu Jun 14 01:44:19 [conn10] insert bar.coll1 keyUpdates:0 locks(micros) W:78 w:1195191 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn10] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn10] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:78 w:1195191 reslen:67 0ms
m30001| Thu Jun 14 01:44:19 [conn3] allocExtent foo.coll1 size 2304 0
m30001| Thu Jun 14 01:44:19 [conn3] adding _id index for collection foo.coll1
m30001| Thu Jun 14 01:44:19 [conn3] build index foo.coll1 { _id: 1 }
m30001| mem info: before index start vsize: 144 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:19 [conn3] external sort root: /data/db/test1/_tmp/esort.1339652659.1/
m30001| mem info: before final sort vsize: 144 resident: 32 mapped: 32
m30001| mem info: after final sort vsize: 144 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:19 [conn3] external sort used : 0 files in 0 secs
m30001| Thu Jun 14 01:44:19 [conn3] allocExtent foo.coll1.$_id_ size 36864 0
m30001| Thu Jun 14 01:44:19 [conn3] New namespace: foo.coll1.$_id_
m30001| Thu Jun 14 01:44:19 [conn3] done building bottom layer, going to commit
m30001| Thu Jun 14 01:44:19 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:19 [conn3] New namespace: foo.coll1
m30001| Thu Jun 14 01:44:19 [conn3] insert foo.coll1 keyUpdates:0 locks(micros) W:258 w:704815 0ms
m30001| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:258 w:704815 reslen:67 0ms
m30001| Thu Jun 14 01:44:19 [conn3] allocExtent foo.coll2 size 2304 0
m30001| Thu Jun 14 01:44:19 [conn3] adding _id index for collection foo.coll2
m30001| Thu Jun 14 01:44:19 [conn3] build index foo.coll2 { _id: 1 }
m30001| mem info: before index start vsize: 144 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:19 [conn3] external sort root: /data/db/test1/_tmp/esort.1339652659.2/
m30001| mem info: before final sort vsize: 144 resident: 32 mapped: 32
m30001| mem info: after final sort vsize: 144 resident: 32 mapped: 32
m30001| Thu Jun 14 01:44:19 [conn3] external sort used : 0 files in 0 secs
m30001| Thu Jun 14 01:44:19 [conn3] allocExtent foo.coll2.$_id_ size 36864 0
m30001| Thu Jun 14 01:44:19 [conn3] New namespace: foo.coll2.$_id_
m30001| Thu Jun 14 01:44:19 [conn3] done building bottom layer, going to commit
m30001| Thu Jun 14 01:44:19 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:19 [conn3] New namespace: foo.coll2
m30001| Thu Jun 14 01:44:19 [conn3] insert foo.coll2 keyUpdates:0 locks(micros) W:258 w:705648 0ms
m30001| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1.0 }
m30001| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:258 w:705648 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.coll2 size 2304 0
m30000| Thu Jun 14 01:44:19 [conn10] adding _id index for collection bar.coll2
m30000| Thu Jun 14 01:44:19 [conn10] build index bar.coll2 { _id: 1 }
m30000| mem info: before index start vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn10] external sort root: /data/db/test0/_tmp/esort.1339652659.15/
m30000| mem info: before final sort vsize: 182 resident: 49 mapped: 64
m30000| mem info: after final sort vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn10] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:44:19 [conn10] allocExtent bar.coll2.$_id_ size 36864 0
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.coll2.$_id_
m30000| Thu Jun 14 01:44:19 [conn10] done building bottom layer, going to commit
m30000| Thu Jun 14 01:44:19 [conn10] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:19 [conn10] New namespace: bar.coll2
m30000| Thu Jun 14 01:44:19 [conn10] insert bar.coll2 keyUpdates:0 locks(micros) W:78 w:1196023 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn10] run command admin.$cmd { getlasterror: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn10] command admin.$cmd command: { getlasterror: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:78 w:1196023 reslen:67 0ms
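At this point each database holds three collections (coll0 through coll2 for foo on shard0001 and for bar on shard0000), each created implicitly by a first insert and confirmed with getlasterror. The test source itself is not reproduced in this log, so the following is only an assumed reconstruction of a setup loop that would produce the same pattern through the mongos:

var mongosA = new Mongo("localhost:30999");                       // assumed mongos address
["foo", "bar"].forEach(function (dbName) {
    for (var i = 0; i < 3; i++) {
        // the first insert creates the collection and its _id index on the database's primary shard
        mongosA.getDB(dbName).getCollection("coll" + i).insert({ _id: 1 });   // hypothetical document
        mongosA.getDB(dbName).getLastError();                     // shows up above as { getlasterror: 1.0 }
    }
});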
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4470 reslen:85 0ms
m30999| Thu Jun 14 01:44:19 [conn] enabling sharding on: foo
m30000| Thu Jun 14 01:44:19 [conn3] update config.databases query: { _id: "foo" } update: { _id: "foo", partitioned: true, primary: "shard0001" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:1855 w:4579 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4579 reslen:85 0ms
{ "ok" : 1 }
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4579 reslen:85 0ms
m30999| Thu Jun 14 01:44:19 [conn] enabling sharding on: bar
m30000| Thu Jun 14 01:44:19 [conn3] update config.databases query: { _id: "bar" } update: { _id: "bar", partitioned: true, primary: "shard0000" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:1855 w:4647 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4647 reslen:85 0ms
{ "ok" : 1 }
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4647 reslen:85 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called foo.system.indexes { ns: "foo.coll1" }
m30001| Thu Jun 14 01:44:19 [conn4] query foo.system.indexes query: { ns: "foo.coll1" } ntoreturn:0 keyUpdates:0 locks(micros) r:262 nreturned:1 reslen:84 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called foo.system.namespaces { name: "foo.coll1" }
m30001| Thu Jun 14 01:44:19 [conn4] query foo.system.namespaces query: { name: "foo.coll1" } ntoreturn:1 keyUpdates:0 locks(micros) r:325 nreturned:1 reslen:45 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called admin.$cmd { checkShardingIndex: "foo.coll1", keyPattern: { _id: 1.0 } }
m30001| Thu Jun 14 01:44:19 [conn4] run command admin.$cmd { checkShardingIndex: "foo.coll1", keyPattern: { _id: 1.0 } }
m30001| Thu Jun 14 01:44:19 [conn4] command admin.$cmd command: { checkShardingIndex: "foo.coll1", keyPattern: { _id: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) r:325 reslen:46 0ms
m30999| Thu Jun 14 01:44:19 [conn] CMD: shardcollection: { shardCollection: "foo.coll1", key: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4647 reslen:85 0ms
m30999| Thu Jun 14 01:44:19 [conn] enable sharding on: foo.coll1 with shard key: { _id: 1.0 }
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called foo.$cmd { count: "coll1", query: {} }
m30001| Thu Jun 14 01:44:19 [conn4] run command foo.$cmd { count: "coll1", query: {} }
m30001| Thu Jun 14 01:44:19 [conn4] command foo.$cmd command: { count: "coll1", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:350 reslen:48 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called admin.$cmd { splitVector: "foo.coll1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30001| Thu Jun 14 01:44:19 [conn4] run command admin.$cmd { splitVector: "foo.coll1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30001| Thu Jun 14 01:44:19 [conn4] command admin.$cmd command: { splitVector: "foo.coll1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 } ntoreturn:1 keyUpdates:0 locks(micros) r:376 reslen:53 0ms
m30999| Thu Jun 14 01:44:19 [conn] going to create 1 chunk(s) for: foo.coll1 using new epoch 4fd97a330955c08e55c3f85b
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.$cmd { count: "chunks", query: { ns: "foo.coll1" } }
m30000| Thu Jun 14 01:44:19 [conn5] run command config.$cmd { count: "chunks", query: { ns: "foo.coll1" } }
m30000| Thu Jun 14 01:44:19 [conn5] command config.$cmd command: { count: "chunks", query: { ns: "foo.coll1" } } ntoreturn:1 keyUpdates:0 locks(micros) r:639 w:976 reslen:48 0ms
m30000| Thu Jun 14 01:44:19 [conn5] update config.chunks query: { _id: "foo.coll1-_id_MinKey" } update: { _id: "foo.coll1-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85b'), ns: "foo.coll1", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0001" } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:639 w:1092 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:639 w:1092 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.chunks { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn5] query config.chunks query: { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:936 w:1092 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn3] update config.databases query: { _id: "foo" } update: { _id: "foo", partitioned: true, primary: "shard0001" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:1855 w:4681 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:4681 reslen:85 0ms
m30001| Thu Jun 14 01:44:19 [conn4] index already exists with diff name _id_1 { _id: 1.0 }
m30001| Thu Jun 14 01:44:19 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) r:376 w:41 0ms
m30001| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:266 w:705648 reslen:175 0ms
m30000| Thu Jun 14 01:44:19 [conn3] allocExtent config.collections size 6912 0
m30000| Thu Jun 14 01:44:19 [conn3] adding _id index for collection config.collections
m30000| Thu Jun 14 01:44:19 [conn3] build index config.collections { _id: 1 }
m30000| mem info: before index start vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn3] external sort root: /data/db/test0/_tmp/esort.1339652659.16/
m30000| mem info: before final sort vsize: 182 resident: 49 mapped: 64
m30000| mem info: after final sort vsize: 182 resident: 49 mapped: 64
m30000| Thu Jun 14 01:44:19 [conn3] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:44:19 [conn3] allocExtent config.collections.$_id_ size 36864 0
m30000| Thu Jun 14 01:44:19 [conn3] New namespace: config.collections.$_id_
m30000| Thu Jun 14 01:44:19 [conn3] done building bottom layer, going to commit
m30000| Thu Jun 14 01:44:19 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:19 [conn3] New namespace: config.collections
m30000| Thu Jun 14 01:44:19 [conn3] update config.collections query: { _id: "foo.coll1" } update: { _id: "foo.coll1", lastmod: new Date(1339652659), dropped: false, key: { _id: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85b') } nscanned:0 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1855 w:5796 1ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1855 w:5796 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called config.chunks { query: { ns: "foo.coll1" }, orderby: { lastmod: -1 } }
m30000| Thu Jun 14 01:44:19 [conn3] query config.chunks query: { query: { ns: "foo.coll1" }, orderby: { lastmod: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:2027 w:5796 nreturned:1 reslen:167 0ms
m30001| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] trying to set shard version of 1|0||4fd97a330955c08e55c3f85b for 'foo.coll1'
m30001| Thu Jun 14 01:44:19 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:44:19 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a330955c08e55c3f85b for 'foo.coll1'
m30001| Thu Jun 14 01:44:19 [conn3] creating new connection to:localhost:30000
m30001| Thu Jun 14 01:44:19 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:19 [conn3] connected connection!
m30001| Thu Jun 14 01:44:19 [conn3] loaded 1 chunks into new chunk manager for foo.coll1 with version 1|0||4fd97a330955c08e55c3f85b
m30001| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:273 w:705648 reslen:86 1ms
m30000| Thu Jun 14 01:44:19 [initandlisten] connection accepted from 127.0.0.1:60368 #11 (11 connections now open)
m30000| Thu Jun 14 01:44:19 [conn11] runQuery called config.collections { _id: "foo.coll1" }
m30000| Thu Jun 14 01:44:19 [conn11] query config.collections query: { _id: "foo.coll1" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:19 reslen:129 0ms
m30000| Thu Jun 14 01:44:19 [conn11] runQuery called config.chunks { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } }, { ns: "foo.coll1", shard: "shard0001", lastmod: { $gt: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn11] query config.chunks query: { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } }, { ns: "foo.coll1", shard: "shard0001", lastmod: { $gt: Timestamp 0|0 } } ] } ntoreturn:0 keyUpdates:0 locks(micros) r:278 nreturned:1 reslen:167 0ms
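The setShardVersion exchange above is how mongos tells shard0001 that foo.coll1 is now sharded at version 1|0 with the new epoch: the shard has no chunk manager for the collection yet, so it reads config.collections and config.chunks from the config server and loads the single chunk. A small sketch, assuming a direct connection to shard0001, of checking the version the shard now holds:

var shardAdmin = new Mongo("localhost:30001").getDB("admin");     // direct connection to shard0001 (assumed port)
printjson(shardAdmin.runCommand({ getShardVersion: "foo.coll1" }));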
{ "collectionsharded" : "foo.coll1", "ok" : 1 }
{ "collectionsharded" : "foo.coll2", "ok" : 1 }
{ "collectionsharded" : "bar.coll1", "ok" : 1 }
{ "collectionsharded" : "bar.coll2", "ok" : 1 }
----
Setup collections for moveprimary test...
----
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "foo", "partitioned" : true, "primary" : "shard0001" }
          foo.coll1 chunks:
              shard0001 1
              { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 0)
          foo.coll2 chunks:
              shard0001 1
              { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0001 Timestamp(1000, 0)
      { "_id" : "bar", "partitioned" : true, "primary" : "shard0000" }
          bar.coll1 chunks:
              shard0000 1
              { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(1000, 0)
          bar.coll2 chunks:
              shard0000 1
              { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(1000, 0)
----
Running movePrimary for foo through mongosA ...
----
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2027 w:5796 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2027 w:5796 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.$cmd { count: "chunks", query: { ns: "foo.coll2" } }
m30000| Thu Jun 14 01:44:19 [conn5] run command config.$cmd { count: "chunks", query: { ns: "foo.coll2" } }
m30000| Thu Jun 14 01:44:19 [conn5] command config.$cmd command: { count: "chunks", query: { ns: "foo.coll2" } } ntoreturn:1 keyUpdates:0 locks(micros) r:1023 w:1092 reslen:48 0ms
m30000| Thu Jun 14 01:44:19 [conn5] update config.chunks query: { _id: "foo.coll2-_id_MinKey" } update: { _id: "foo.coll2-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85c'), ns: "foo.coll2", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0001" } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1023 w:1180 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1023 w:1180 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.chunks { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn5] query config.chunks query: { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:1213 w:1180 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn3] update config.databases query: { _id: "foo" } update: { _id: "foo", partitioned: true, primary: "shard0001" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:2027 w:5835 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2027 w:5835 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn3] update config.collections query: { _id: "foo.coll2" } update: { _id: "foo.coll2", lastmod: new Date(1339652659), dropped: false, key: { _id: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85c') } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:2027 w:5875 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2027 w:5875 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called config.chunks { query: { ns: "foo.coll2" }, orderby: { lastmod: -1 } }
m30000| Thu Jun 14 01:44:19 [conn3] query config.chunks query: { query: { ns: "foo.coll2" }, orderby: { lastmod: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:2107 w:5875 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn11] runQuery called config.collections { _id: "foo.coll2" }
m30000| Thu Jun 14 01:44:19 [conn11] query config.collections query: { _id: "foo.coll2" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:294 reslen:129 0ms
m30000| Thu Jun 14 01:44:19 [conn11] runQuery called config.chunks { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } }, { ns: "foo.coll2", shard: "shard0001", lastmod: { $gt: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn11] query config.chunks query: { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } }, { ns: "foo.coll2", shard: "shard0001", lastmod: { $gt: Timestamp 0|0 } } ] } ntoreturn:0 keyUpdates:0 locks(micros) r:503 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2107 w:5875 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called bar.system.indexes { ns: "bar.coll1" }
m30000| Thu Jun 14 01:44:19 [conn5] query bar.system.indexes query: { ns: "bar.coll1" } ntoreturn:0 keyUpdates:0 locks(micros) r:1282 w:1180 nreturned:1 reslen:84 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called bar.system.namespaces { name: "bar.coll1" }
m30000| Thu Jun 14 01:44:19 [conn5] query bar.system.namespaces query: { name: "bar.coll1" } ntoreturn:1 keyUpdates:0 locks(micros) r:1335 w:1180 nreturned:1 reslen:45 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { checkShardingIndex: "bar.coll1", keyPattern: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { checkShardingIndex: "bar.coll1", keyPattern: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { checkShardingIndex: "bar.coll1", keyPattern: { _id: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) r:1335 w:1180 reslen:46 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2107 w:5875 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called bar.$cmd { count: "coll1", query: {} }
m30000| Thu Jun 14 01:44:19 [conn5] run command bar.$cmd { count: "coll1", query: {} }
m30000| Thu Jun 14 01:44:19 [conn5] command bar.$cmd command: { count: "coll1", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1349 w:1180 reslen:48 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { splitVector: "bar.coll1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { splitVector: "bar.coll1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { splitVector: "bar.coll1", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 } ntoreturn:1 keyUpdates:0 locks(micros) r:1368 w:1180 reslen:53 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.$cmd { count: "chunks", query: { ns: "bar.coll1" } }
m30000| Thu Jun 14 01:44:19 [conn5] run command config.$cmd { count: "chunks", query: { ns: "bar.coll1" } }
m30000| Thu Jun 14 01:44:19 [conn5] command config.$cmd command: { count: "chunks", query: { ns: "bar.coll1" } } ntoreturn:1 keyUpdates:0 locks(micros) r:1433 w:1180 reslen:48 0ms
m30000| Thu Jun 14 01:44:19 [initandlisten] connection accepted from 127.0.0.1:60369 #12 (12 connections now open)
m30000| Thu Jun 14 01:44:19 [conn5] update config.chunks query: { _id: "bar.coll1-_id_MinKey" } update: { _id: "bar.coll1-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85d'), ns: "bar.coll1", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000" } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1433 w:1262 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1433 w:1262 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.chunks { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn5] query config.chunks query: { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:1598 w:1262 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn3] update config.databases query: { _id: "bar" } update: { _id: "bar", partitioned: true, primary: "shard0000" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:2107 w:5907 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2107 w:5907 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn3] update config.collections query: { _id: "bar.coll1" } update: { _id: "bar.coll1", lastmod: new Date(1339652659), dropped: false, key: { _id: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85d') } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:2107 w:5949 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2107 w:5949 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] run command admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] command admin.$cmd command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:85 w:1196023 reslen:175 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called config.chunks { query: { ns: "bar.coll1" }, orderby: { lastmod: -1 } }
m30000| Thu Jun 14 01:44:19 [conn3] query config.chunks query: { query: { ns: "bar.coll1" }, orderby: { lastmod: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:2186 w:5949 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn12] index already exists with diff name _id_1 { _id: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn12] insert bar.system.indexes keyUpdates:0 locks(micros) w:47 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] run command admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] trying to set shard version of 1|0||4fd97a330955c08e55c3f85d for 'bar.coll1'
m30000| Thu Jun 14 01:44:19 [conn10] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:44:19 [conn10] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a330955c08e55c3f85d for 'bar.coll1'
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.collections { _id: "bar.coll1" }
m30000| Thu Jun 14 01:44:19 [conn10] query config.collections query: { _id: "bar.coll1" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:17 reslen:129 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.chunks { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } }, { ns: "bar.coll1", shard: "shard0000", lastmod: { $gt: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn10] query config.chunks query: { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } }, { ns: "bar.coll1", shard: "shard0000", lastmod: { $gt: Timestamp 0|0 } } ] } ntoreturn:0 keyUpdates:0 locks(micros) r:244 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn10] loaded 1 chunks into new chunk manager for bar.coll1 with version 1|0||4fd97a330955c08e55c3f85d
m30000| Thu Jun 14 01:44:19 [conn10] command admin.$cmd command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:92 w:1196023 reslen:86 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2186 w:5949 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called bar.system.indexes { ns: "bar.coll2" }
m30000| Thu Jun 14 01:44:19 [conn5] query bar.system.indexes query: { ns: "bar.coll2" } ntoreturn:0 keyUpdates:0 locks(micros) r:1657 w:1262 nreturned:1 reslen:84 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called bar.system.namespaces { name: "bar.coll2" }
m30000| Thu Jun 14 01:44:19 [conn5] query bar.system.namespaces query: { name: "bar.coll2" } ntoreturn:1 keyUpdates:0 locks(micros) r:1708 w:1262 nreturned:1 reslen:45 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { checkShardingIndex: "bar.coll2", keyPattern: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { checkShardingIndex: "bar.coll2", keyPattern: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { checkShardingIndex: "bar.coll2", keyPattern: { _id: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) r:1708 w:1262 reslen:46 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2186 w:5949 reslen:67 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called bar.$cmd { count: "coll2", query: {} }
m30000| Thu Jun 14 01:44:19 [conn5] run command bar.$cmd { count: "coll2", query: {} }
m30000| Thu Jun 14 01:44:19 [conn5] command bar.$cmd command: { count: "coll2", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1722 w:1262 reslen:48 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { splitVector: "bar.coll2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { splitVector: "bar.coll2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { splitVector: "bar.coll2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 } ntoreturn:1 keyUpdates:0 locks(micros) r:1737 w:1262 reslen:53 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.$cmd { count: "chunks", query: { ns: "bar.coll2" } }
m30000| Thu Jun 14 01:44:19 [conn5] run command config.$cmd { count: "chunks", query: { ns: "bar.coll2" } }
m30000| Thu Jun 14 01:44:19 [conn5] command config.$cmd command: { count: "chunks", query: { ns: "bar.coll2" } } ntoreturn:1 keyUpdates:0 locks(micros) r:1803 w:1262 reslen:48 0ms
m30000| Thu Jun 14 01:44:19 [conn5] update config.chunks query: { _id: "bar.coll2-_id_MinKey" } update: { _id: "bar.coll2-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85e'), ns: "bar.coll2", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000" } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:1803 w:1335 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1803 w:1335 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.chunks { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn5] query config.chunks query: { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:1970 w:1335 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn3] update config.databases query: { _id: "bar" } update: { _id: "bar", partitioned: true, primary: "shard0000" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:2186 w:5980 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2186 w:5980 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn3] update config.collections query: { _id: "bar.coll2" } update: { _id: "bar.coll2", lastmod: new Date(1339652659), dropped: false, key: { _id: 1.0 }, unique: false, lastmodEpoch: ObjectId('4fd97a330955c08e55c3f85e') } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) r:2186 w:6023 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2186 w:6023 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] run command admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] command admin.$cmd command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:97 w:1196023 reslen:175 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called config.chunks { query: { ns: "bar.coll2" }, orderby: { lastmod: -1 } }
m30000| Thu Jun 14 01:44:19 [conn3] query config.chunks query: { query: { ns: "bar.coll2" }, orderby: { lastmod: -1 } } ntoreturn:1 keyUpdates:0 locks(micros) r:2265 w:6023 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] run command admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn10] trying to set shard version of 1|0||4fd97a330955c08e55c3f85e for 'bar.coll2'
m30000| Thu Jun 14 01:44:19 [conn10] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:44:19 [conn10] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a330955c08e55c3f85e for 'bar.coll2'
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.collections { _id: "bar.coll2" }
m30000| Thu Jun 14 01:44:19 [conn10] query config.collections query: { _id: "bar.coll2" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:15 reslen:129 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.chunks { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } }, { ns: "bar.coll2", shard: "shard0000", lastmod: { $gt: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn10] query config.chunks query: { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } }, { ns: "bar.coll2", shard: "shard0000", lastmod: { $gt: Timestamp 0|0 } } ] } ntoreturn:0 keyUpdates:0 locks(micros) r:210 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn10] loaded 1 chunks into new chunk manager for bar.coll2 with version 1|0||4fd97a330955c08e55c3f85e
m30000| Thu Jun 14 01:44:19 [conn10] command admin.$cmd command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:104 w:1196023 reslen:86 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.databases { _id: "foo" }
m30000| Thu Jun 14 01:44:19 [conn10] query config.databases query: { _id: "foo" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) W:104 r:16 w:1196023 reslen:75 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.databases { _id: "bar" }
m30000| Thu Jun 14 01:44:19 [conn10] query config.databases query: { _id: "bar" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) W:104 r:30 w:1196023 reslen:75 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.shards {}
m30000| Thu Jun 14 01:44:19 [conn10] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:64 w:1196023 nreturned:2 reslen:120 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.version {}
m30000| Thu Jun 14 01:44:19 [conn10] query config.version ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:86 w:1196023 nreturned:1 reslen:47 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.version {}
m30000| Thu Jun 14 01:44:19 [conn10] query config.version ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:106 w:1196023 nreturned:1 reslen:47 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.shards { query: {}, orderby: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.shards query: { query: {}, orderby: { _id: 1.0 } } ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:188 w:1196023 nreturned:2 reslen:120 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.databases { query: {}, orderby: { name: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.databases query: { query: {}, orderby: { name: 1.0 } } ntoreturn:0 scanAndOrder:1 keyUpdates:0 locks(micros) W:104 r:272 w:1196023 nreturned:3 reslen:184 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.collections { query: { _id: /^foo\./ }, orderby: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.collections query: { query: { _id: /^foo\./ }, orderby: { _id: 1.0 } } ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:422 w:1196023 nreturned:2 reslen:238 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.$cmd { group: { cond: { ns: "foo.coll1" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] run command config.$cmd { group: { cond: { ns: "foo.coll1" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.system.js {}
m30000| Thu Jun 14 01:44:19 [conn10] query config.system.js ntoreturn:0 keyUpdates:0 locks(micros) r:87 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:44:19 [conn10] command config.$cmd command: { group: { cond: { ns: "foo.coll1" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } } ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:23220 w:1196023 reslen:121 22ms
m30000| Thu Jun 14 01:44:19 [conn12] index already exists with diff name _id_1 { _id: 1.0 }
m30000| Thu Jun 14 01:44:19 [conn12] insert bar.system.indexes keyUpdates:0 locks(micros) w:101 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.chunks { query: { ns: "foo.coll1" }, orderby: { min: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.chunks query: { query: { ns: "foo.coll1" }, orderby: { min: 1.0 } } ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:23385 w:1196023 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.$cmd { group: { cond: { ns: "foo.coll2" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] run command config.$cmd { group: { cond: { ns: "foo.coll2" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] command config.$cmd command: { group: { cond: { ns: "foo.coll2" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } } ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:24112 w:1196023 reslen:121 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.chunks { query: { ns: "foo.coll2" }, orderby: { min: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.chunks query: { query: { ns: "foo.coll2" }, orderby: { min: 1.0 } } ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:24196 w:1196023 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.collections { query: { _id: /^bar\./ }, orderby: { _id: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.collections query: { query: { _id: /^bar\./ }, orderby: { _id: 1.0 } } ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:24331 w:1196023 nreturned:2 reslen:238 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.$cmd { group: { cond: { ns: "bar.coll1" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] run command config.$cmd { group: { cond: { ns: "bar.coll1" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] command config.$cmd command: { group: { cond: { ns: "bar.coll1" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } } ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:25012 w:1196023 reslen:121 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.chunks { query: { ns: "bar.coll1" }, orderby: { min: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.chunks query: { query: { ns: "bar.coll1" }, orderby: { min: 1.0 } } ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:25093 w:1196023 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.$cmd { group: { cond: { ns: "bar.coll2" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] run command config.$cmd { group: { cond: { ns: "bar.coll2" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } }
m30000| Thu Jun 14 01:44:19 [conn10] command config.$cmd command: { group: { cond: { ns: "bar.coll2" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function (doc, out) {
m30000| out.nChunks++;
m30000| } } } ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:25763 w:1196023 reslen:121 0ms
m30000| Thu Jun 14 01:44:19 [conn10] runQuery called config.chunks { query: { ns: "bar.coll2" }, orderby: { min: 1.0 } }
m30000| Thu Jun 14 01:44:19 [conn10] query config.chunks query: { query: { ns: "bar.coll2" }, orderby: { min: 1.0 } } ntoreturn:0 keyUpdates:0 locks(micros) W:104 r:25844 w:1196023 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called config.databases { _id: "foo" }
m30000| Thu Jun 14 01:44:19 [conn3] query config.databases query: { _id: "foo" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:2288 w:6023 reslen:75 0ms
m30000| Thu Jun 14 01:44:19 [conn3] runQuery called config.collections { _id: /^foo\./ }
m30000| Thu Jun 14 01:44:19 [conn3] query config.collections query: { _id: /^foo\./ } ntoreturn:0 keyUpdates:0 locks(micros) r:2423 w:6023 nreturned:2 reslen:238 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.chunks { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn5] query config.chunks query: { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:2158 w:1335 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.chunks { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:19 [conn5] query config.chunks query: { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:2312 w:1335 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.shards { host: "localhost:30000" }
m30000| Thu Jun 14 01:44:19 [conn5] query config.shards query: { host: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2380 w:1335 nreturned:1 reslen:70 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.locks { _id: "foo-movePrimary" }
m30000| Thu Jun 14 01:44:19 [conn5] query config.locks query: { _id: "foo-movePrimary" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:2394 w:1335 reslen:20 0ms
m30000| Thu Jun 14 01:44:19 [conn5] insert config.locks keyUpdates:0 locks(micros) r:2394 w:1403 0ms
m30000| Thu Jun 14 01:44:19 [conn5] running multiple plans
m30000| Thu Jun 14 01:44:19 [conn5] update config.locks query: { _id: "foo-movePrimary", state: 0 } update: { $set: { state: 1, who: "domU-12-31-39-01-70-B4:30999:1339652657:1804289383:conn:1365180540", process: "domU-12-31-39-01-70-B4:30999:1339652657:1804289383", when: new Date(1339652659822), why: "Moving primary shard of foo", ts: ObjectId('4fd97a330955c08e55c3f85f') } } nscanned:1 nmoved:1 nupdated:1 keyUpdates:0 locks(micros) r:2394 w:1629 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2394 w:1629 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.locks { _id: "foo-movePrimary" }
m30000| Thu Jun 14 01:44:19 [conn5] query config.locks query: { _id: "foo-movePrimary" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:2408 w:1629 reslen:268 0ms
m30000| Thu Jun 14 01:44:19 [conn5] update config.locks query: { _id: "foo-movePrimary" } update: { $set: { state: 2, who: "domU-12-31-39-01-70-B4:30999:1339652657:1804289383:conn:1365180540", process: "domU-12-31-39-01-70-B4:30999:1339652657:1804289383", when: new Date(1339652659822), why: "Moving primary shard of foo", ts: ObjectId('4fd97a330955c08e55c3f85f') } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:2408 w:1676 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:19 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2408 w:1676 reslen:85 0ms
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called config.locks { _id: "foo-movePrimary" }
m30000| Thu Jun 14 01:44:19 [conn5] query config.locks query: { _id: "foo-movePrimary" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:2421 w:1676 reslen:268 0ms
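The config.locks traffic above is the distributed-lock acquisition for foo-movePrimary. A minimal sketch of the same two-phase pattern from the shell (field values are illustrative; mongos fills in its own host/process identifiers):

    // Phase 1: try to move the lock from state 0 (free) to state 1 (acquiring).
    var cfgdb = db.getSiblingDB("config");
    var myTs  = ObjectId();                        // illustrative lock ticket
    cfgdb.locks.update({ _id: "foo-movePrimary", state: 0 },
                       { $set: { state: 1, ts: myTs, when: new Date(),
                                 why: "Moving primary shard of foo",
                                 who: "<host:port:...:conn>",      // illustrative
                                 process: "<host:port:...>" } });  // illustrative
    cfgdb.runCommand({ getlasterror: 1 });
    // Phase 2: re-read the document; the writer whose ts stuck owns the lock
    // and promotes it to state 2 (held), as conn5 does above.
    var doc = cfgdb.locks.findOne({ _id: "foo-movePrimary" });
    if (doc.ts.str == myTs.str) {
        cfgdb.locks.update({ _id: "foo-movePrimary" }, { $set: { state: 2 } });
    }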
m30000| Thu Jun 14 01:44:19 [conn5] runQuery called foo.$cmd { clone: "localhost:30001", collsToIgnore: [ "foo.coll1", "foo.coll2" ] }
m30000| Thu Jun 14 01:44:19 [conn5] run command foo.$cmd { clone: "localhost:30001", collsToIgnore: [ "foo.coll1", "foo.coll2" ] }
m30000| Thu Jun 14 01:44:19 [conn5] opening db: foo
m30000| Thu Jun 14 01:44:19 [conn5] creating new connection to:localhost:30001
m30000| Thu Jun 14 01:44:19 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:44:19 [conn5] connected connection!
m30000| Thu Jun 14 01:44:19 [conn5] cloner got { name: "foo.system.indexes" }
m30000| Thu Jun 14 01:44:19 [conn5] not cloning because system collection
m30000| Thu Jun 14 01:44:19 [conn5] cloner got { name: "foo.coll0.$_id_" }
m30000| Thu Jun 14 01:44:19 [conn5] not cloning because has $
m30000| Thu Jun 14 01:44:19 [conn5] cloner got { name: "foo.coll0" }
m30000| Thu Jun 14 01:44:19 [conn5] not ignoring collection foo.coll0
m30000| Thu Jun 14 01:44:19 [conn5] cloner got { name: "foo.coll1.$_id_" }
m30000| Thu Jun 14 01:44:19 [conn5] not cloning because has $
m30000| Thu Jun 14 01:44:19 [conn5] cloner got { name: "foo.coll1" }
m30000| Thu Jun 14 01:44:19 [conn5] ignoring collection foo.coll1
m30000| Thu Jun 14 01:44:19 [conn5] cloner got { name: "foo.coll2.$_id_" }
m30000| Thu Jun 14 01:44:19 [conn5] not cloning because has $
m30000| Thu Jun 14 01:44:19 [conn5] cloner got { name: "foo.coll2" }
m30000| Thu Jun 14 01:44:19 [conn5] ignoring collection foo.coll2
m30000| Thu Jun 14 01:44:19 [conn5] really will clone: { name: "foo.coll0" }
m30000| Thu Jun 14 01:44:19 [conn5] create collection foo.coll0 {}
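The cloner output above comes from the clone command that movePrimary sends to the new primary (shard0000): it copies every unsharded collection from the old primary and skips system collections, $-namespaces, and the sharded collections listed in collsToIgnore. A sketch of the command as logged, issued against the new primary shard:

    // Copy foo's unsharded collections from the old primary (localhost:30001);
    // foo.coll1 and foo.coll2 are sharded and therefore left in place.
    db.getSiblingDB("foo").runCommand({
        clone: "localhost:30001",
        collsToIgnore: [ "foo.coll1", "foo.coll2" ]
    });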
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called foo.system.indexes { ns: "foo.coll2" }
m30001| Thu Jun 14 01:44:19 [conn4] query foo.system.indexes query: { ns: "foo.coll2" } ntoreturn:0 keyUpdates:0 locks(micros) r:480 w:41 nreturned:1 reslen:84 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called foo.system.namespaces { name: "foo.coll2" }
m30001| Thu Jun 14 01:44:19 [conn4] query foo.system.namespaces query: { name: "foo.coll2" } ntoreturn:1 keyUpdates:0 locks(micros) r:533 w:41 nreturned:1 reslen:45 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called admin.$cmd { checkShardingIndex: "foo.coll2", keyPattern: { _id: 1.0 } }
m30001| Thu Jun 14 01:44:19 [conn4] run command admin.$cmd { checkShardingIndex: "foo.coll2", keyPattern: { _id: 1.0 } }
m30001| Thu Jun 14 01:44:19 [conn4] command admin.$cmd command: { checkShardingIndex: "foo.coll2", keyPattern: { _id: 1.0 } } ntoreturn:1 keyUpdates:0 locks(micros) r:533 w:41 reslen:46 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called foo.$cmd { count: "coll2", query: {} }
m30001| Thu Jun 14 01:44:19 [conn4] run command foo.$cmd { count: "coll2", query: {} }
m30001| Thu Jun 14 01:44:19 [conn4] command foo.$cmd command: { count: "coll2", query: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:552 w:41 reslen:48 0ms
m30001| Thu Jun 14 01:44:19 [conn4] runQuery called admin.$cmd { splitVector: "foo.coll2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30001| Thu Jun 14 01:44:19 [conn4] run command admin.$cmd { splitVector: "foo.coll2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 }
m30001| Thu Jun 14 01:44:19 [conn4] command admin.$cmd command: { splitVector: "foo.coll2", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, maxSplitPoints: 0, maxChunkObjects: 0 } ntoreturn:1 keyUpdates:0 locks(micros) r:571 w:41 reslen:53 0ms
m30001| Thu Jun 14 01:44:19 [conn4] index already exists with diff name _id_1 { _id: 1.0 }
m30001| Thu Jun 14 01:44:19 [conn4] insert foo.system.indexes keyUpdates:0 locks(micros) r:571 w:80 0ms
m30001| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:279 w:705648 reslen:175 0ms
m30001| Thu Jun 14 01:44:19 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] run command admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:19 [conn3] trying to set shard version of 1|0||4fd97a330955c08e55c3f85c for 'foo.coll2'
m30001| Thu Jun 14 01:44:19 [conn3] no current chunk manager found for this shard, will initialize
m30001| Thu Jun 14 01:44:19 [conn3] verifying cached version 0|0||000000000000000000000000 and new version 1|0||4fd97a330955c08e55c3f85c for 'foo.coll2'
m30001| Thu Jun 14 01:44:19 [conn3] loaded 1 chunks into new chunk manager for foo.coll2 with version 1|0||4fd97a330955c08e55c3f85c
m30001| Thu Jun 14 01:44:19 [conn3] command admin.$cmd command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:286 w:705648 reslen:86 0ms
m30001| Thu Jun 14 01:44:19 [initandlisten] connection accepted from 127.0.0.1:48945 #5 (5 connections now open)
m30001| Thu Jun 14 01:44:19 [conn5] runQuery called foo.system.namespaces {}
m30001| Thu Jun 14 01:44:19 [conn5] query foo.system.namespaces ntoreturn:0 keyUpdates:0 locks(micros) r:43 nreturned:7 reslen:222 0ms
m30999| Thu Jun 14 01:44:19 [conn] ChunkManager: time to load chunks for foo.coll1: 0ms sequenceNumber: 2 version: 1|0||4fd97a330955c08e55c3f85b based on: (empty)
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0001 localhost:30001 foo.coll1 { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ee5ce0
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.coll1", need_authoritative: true, errmsg: "first time for collection 'foo.coll1'", ok: 0.0 }
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0001 localhost:30001 foo.coll1 { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8ee5ce0
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:44:19 [conn] CMD: shardcollection: { shardCollection: "foo.coll2", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:44:19 [conn] enable sharding on: foo.coll2 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:44:19 [conn] going to create 1 chunk(s) for: foo.coll2 using new epoch 4fd97a330955c08e55c3f85c
m30999| Thu Jun 14 01:44:19 [conn] ChunkManager: time to load chunks for foo.coll2: 0ms sequenceNumber: 3 version: 1|0||4fd97a330955c08e55c3f85c based on: (empty)
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0001 localhost:30001 foo.coll2 { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ee5ce0
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.coll2", need_authoritative: true, errmsg: "first time for collection 'foo.coll2'", ok: 0.0 }
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0001 localhost:30001 foo.coll2 { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x8ee5ce0
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:44:19 [conn] CMD: shardcollection: { shardCollection: "bar.coll1", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:44:19 [conn] enable sharding on: bar.coll1 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:44:19 [conn] going to create 1 chunk(s) for: bar.coll1 using new epoch 4fd97a330955c08e55c3f85d
m30999| Thu Jun 14 01:44:19 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:19 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:19 [conn] connected connection!
m30999| Thu Jun 14 01:44:19 [conn] ChunkManager: time to load chunks for bar.coll1: 0ms sequenceNumber: 4 version: 1|0||4fd97a330955c08e55c3f85d based on: (empty)
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0000 localhost:30000 bar.coll1 { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ee5858
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "bar.coll1", need_authoritative: true, errmsg: "first time for collection 'bar.coll1'", ok: 0.0 }
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0000 localhost:30000 bar.coll1 { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ee5858
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:44:19 [conn] CMD: shardcollection: { shardCollection: "bar.coll2", key: { _id: 1.0 } }
m30999| Thu Jun 14 01:44:19 [conn] enable sharding on: bar.coll2 with shard key: { _id: 1.0 }
m30999| Thu Jun 14 01:44:19 [conn] going to create 1 chunk(s) for: bar.coll2 using new epoch 4fd97a330955c08e55c3f85e
m30999| Thu Jun 14 01:44:19 [conn] ChunkManager: time to load chunks for bar.coll2: 0ms sequenceNumber: 5 version: 1|0||4fd97a330955c08e55c3f85e based on: (empty)
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0000 localhost:30000 bar.coll2 { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ee5858
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "bar.coll2", need_authoritative: true, errmsg: "first time for collection 'bar.coll2'", ok: 0.0 }
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion shard0000 localhost:30000 bar.coll2 { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x8ee5858
m30999| Thu Jun 14 01:44:19 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
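The shardcollection blocks above all follow the same shape. A sketch of the admin commands behind them, assuming sharding was already enabled on both databases (config.databases shows partitioned: true):

    // Issued through a mongos; sh.shardCollection("bar.coll1", { _id: 1 })
    // is the shell-helper form of the same command.
    db.adminCommand({ shardCollection: "bar.coll1", key: { _id: 1 } });
    db.adminCommand({ shardCollection: "bar.coll2", key: { _id: 1 } });
    // The "setShardVersion failed ... need_authoritative: true" lines that
    // follow each shardCollection are the expected first-contact handshake:
    // mongos resends the same setShardVersion with authoritative: true and
    // the shard then loads its chunk manager for the collection.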
m30999| Thu Jun 14 01:44:19 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30999| Thu Jun 14 01:44:19 [conn] ChunkManager: time to load chunks for foo.coll1: 0ms sequenceNumber: 6 version: 1|0||4fd97a330955c08e55c3f85b based on: (empty)
m30999| Thu Jun 14 01:44:19 [conn] ChunkManager: time to load chunks for foo.coll2: 0ms sequenceNumber: 7 version: 1|0||4fd97a330955c08e55c3f85c based on: (empty)
m30999| Thu Jun 14 01:44:19 [conn] Moving foo primary from: shard0001:localhost:30001 to: shard0000:localhost:30000
m30999| Thu Jun 14 01:44:19 [conn] created new distributed lock for foo-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:19 [conn] inserting initial doc in config.locks for lock foo-movePrimary
m30999| Thu Jun 14 01:44:19 [conn] about to acquire distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339652657:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652657:1804289383:conn:1365180540",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652657:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:44:19 2012" },
m30999| "why" : "Moving primary shard of foo",
m30999| "ts" : { "$oid" : "4fd97a330955c08e55c3f85f" } }
m30999| { "_id" : "foo-movePrimary",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:44:19 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339652657:1804289383' acquired, ts : 4fd97a330955c08e55c3f85f
m30999| Thu Jun 14 01:44:19 [conn] Coll : foo.coll1 sharded? 1
m30999| Thu Jun 14 01:44:19 [conn] Coll : foo.coll2 sharded? 1
m30000| Thu Jun 14 01:44:19 [FileAllocator] flushing directory /data/db/test0
m30000| Thu Jun 14 01:44:20 [FileAllocator] done allocating datafile /data/db/test0/bar.1, size: 32MB, took 0.686 secs
m30000| Thu Jun 14 01:44:20 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30000| Thu Jun 14 01:44:20 [FileAllocator] flushing directory /data/db/test0
m30000| Thu Jun 14 01:44:20 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.295 secs
m30000| Thu Jun 14 01:44:20 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Thu Jun 14 01:44:20 [FileAllocator] flushing directory /data/db/test0
{ "primary " : "shard0000:localhost:30000", "ok" : 1 }
----
Run!
----
m30000| Thu Jun 14 01:44:20 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.283 secs
m30000| Thu Jun 14 01:44:20 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30000| Thu Jun 14 01:44:20 [conn5] allocExtent foo.coll0 size 8192 0
m30000| Thu Jun 14 01:44:20 [conn5] New namespace: foo.coll0
m30000| Thu Jun 14 01:44:20 [conn5] allocExtent foo.system.namespaces size 1536 0
m30000| Thu Jun 14 01:44:20 [conn5] New namespace: foo.system.namespaces
m30000| Thu Jun 14 01:44:20 [conn5] cloning foo.coll0 -> foo.coll0
m30000| Thu Jun 14 01:44:20 [conn5] cloning collection foo.coll0 to foo.coll0 on localhost:30001 with filter { query: {}, $snapshot: true }
m30000| Thu Jun 14 01:44:20 [conn5] adding _id index for collection foo.coll0
m30000| Thu Jun 14 01:44:20 [conn5] allocExtent foo.system.indexes size 3584 0
m30000| Thu Jun 14 01:44:20 [conn5] New namespace: foo.system.indexes
m30000| Thu Jun 14 01:44:20 [conn5] build index foo.coll0 { _id: 1 }
m30000| mem info: before index start vsize: 226 resident: 66 mapped: 96
m30000| Thu Jun 14 01:44:20 [conn5] external sort root: /data/db/test0/_tmp/esort.1339652660.17/
m30000| mem info: before final sort vsize: 226 resident: 66 mapped: 96
m30000| Thu Jun 14 01:44:20 [conn5] not using file. size:35 _compares:0
m30000| mem info: after final sort vsize: 226 resident: 66 mapped: 96
m30000| Thu Jun 14 01:44:20 [conn5] external sort used : 0 files in 0 secs
m30000| Thu Jun 14 01:44:20 [conn5] allocExtent foo.coll0.$_id_ size 36864 0
m30000| Thu Jun 14 01:44:20 [conn5] New namespace: foo.coll0.$_id_
m30000| Thu Jun 14 01:44:20 [conn5] done building bottom layer, going to commit
m30000| Thu Jun 14 01:44:20 [conn5] fastBuildIndex dupsToDrop:0
m30000| Thu Jun 14 01:44:20 [conn5] build index done. scanned 1 total records. 0 secs
m30000| Thu Jun 14 01:44:20 [conn5] cloning collection foo.system.indexes to foo.system.indexes on localhost:30001 with filter { name: { $ne: "_id_" }, ns: { $nin: [ "foo.coll1", "foo.coll2" ] } }
m30000| Thu Jun 14 01:44:21 [conn5] command foo.$cmd command: { clone: "localhost:30001", collsToIgnore: [ "foo.coll1", "foo.coll2" ] } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) W:24 r:2421 w:1175424 reslen:72 1176ms
m30000| Thu Jun 14 01:44:21 [conn3] update config.databases query: { _id: "foo" } update: { _id: "foo", partitioned: true, primary: "shard0000" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:2423 w:6098 0ms
m30000| Thu Jun 14 01:44:21 [conn3] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn3] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn3] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:2423 w:6098 reslen:85 0ms
m30000| Thu Jun 14 01:44:21 [conn5] running multiple plans
m30000| Thu Jun 14 01:44:21 [conn5] update config.locks query: { _id: "foo-movePrimary", ts: ObjectId('4fd97a330955c08e55c3f85f') } update: { $set: { state: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) W:24 r:2421 w:1175615 0ms
m30000| Thu Jun 14 01:44:21 [conn5] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn5] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn5] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:24 r:2421 w:1175615 reslen:85 0ms
m30000| Thu Jun 14 01:44:21 [conn10] runQuery called foo.coll0 {}
m30000| Thu Jun 14 01:44:21 [conn10] query foo.coll0 ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:25890 w:1196023 nreturned:1 reslen:59 0ms
m30001| Thu Jun 14 01:44:20 [conn5] runQuery called admin.$cmd { availablequeryoptions: 1 }
m30001| Thu Jun 14 01:44:20 [conn5] run command admin.$cmd { availablequeryoptions: 1 }
m30001| Thu Jun 14 01:44:20 [conn5] command admin.$cmd command: { availablequeryoptions: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:43 reslen:50 0ms
m30001| Thu Jun 14 01:44:20 [conn5] runQuery called foo.coll0 { query: {}, $snapshot: true }
m30001| Thu Jun 14 01:44:20 [conn5] query foo.coll0 query: { query: {}, $snapshot: true } ntoreturn:0 keyUpdates:0 locks(micros) r:255 nreturned:1 reslen:59 0ms
m30001| Thu Jun 14 01:44:21 [conn5] runQuery called foo.system.indexes { name: { $ne: "_id_" }, ns: { $nin: [ "foo.coll1", "foo.coll2" ] } }
m30001| Thu Jun 14 01:44:21 [conn5] query foo.system.indexes query: { name: { $ne: "_id_" }, ns: { $nin: [ "foo.coll1", "foo.coll2" ] } } ntoreturn:0 keyUpdates:0 locks(micros) r:428 nreturned:0 reslen:20 0ms
m30001| Thu Jun 14 01:44:21 [conn5] SocketException: remote: 127.0.0.1:48945 error: 9001 socket exception [0] server [127.0.0.1:48945]
m30001| Thu Jun 14 01:44:21 [conn5] end connection 127.0.0.1:48945 (4 connections now open)
m30001| Thu Jun 14 01:44:21 [conn4] runQuery called foo.$cmd { drop: "coll0" }
m30001| Thu Jun 14 01:44:21 [conn4] run command foo.$cmd { drop: "coll0" }
m30001| Thu Jun 14 01:44:21 [conn4] CMD: drop foo.coll0
m30001| Thu Jun 14 01:44:21 [conn4] dropCollection: foo.coll0
m30001| Thu Jun 14 01:44:21 [conn4] create collection foo.$freelist {}
m30001| Thu Jun 14 01:44:21 [conn4] allocExtent foo.$freelist size 8192 0
m30001| Thu Jun 14 01:44:21 [conn4] dropIndexes done
m30001| Thu Jun 14 01:44:21 [conn4] command foo.$cmd command: { drop: "coll0" } ntoreturn:1 keyUpdates:0 locks(micros) r:571 w:753 reslen:116 0ms
m30001| Thu Jun 14 01:44:21 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn3] run command admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn3] command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn3] command admin.$cmd command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:286 w:705648 reslen:86 0ms
m30001| Thu Jun 14 01:44:21 [conn3] runQuery called foo.coll1 {}
m30001| Thu Jun 14 01:44:21 [conn3] query foo.coll1 ntoreturn:1 keyUpdates:0 locks(micros) W:286 r:44 w:705648 nreturned:1 reslen:59 0ms
m30001| Thu Jun 14 01:44:21 [conn3] runQuery called admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn3] run command admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn3] command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn3] command admin.$cmd command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:286 r:44 w:705648 reslen:86 0ms
m30001| Thu Jun 14 01:44:21 [conn3] runQuery called foo.coll2 {}
m30001| Thu Jun 14 01:44:21 [conn3] query foo.coll2 ntoreturn:1 keyUpdates:0 locks(micros) W:286 r:70 w:705648 nreturned:1 reslen:59 0ms
m30999| Thu Jun 14 01:44:21 [conn] movePrimary dropping cloned collection foo.coll0 on localhost:30001
m30999| Thu Jun 14 01:44:21 [conn] distributed lock 'foo-movePrimary/domU-12-31-39-01-70-B4:30999:1339652657:1804289383' unlocked.
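Once the clone succeeds, mongos finishes movePrimary by pointing config.databases at the new primary, dropping the cloned collection on the old primary, and releasing the distributed lock, which is what the update/drop/unlock lines above show. Roughly, in shell terms (a sketch; the real work happens inside mongos):

    // 1. Flip the database's primary in the config metadata (the conn3 update above).
    db.getSiblingDB("config").databases.update(
        { _id: "foo" },
        { _id: "foo", partitioned: true, primary: "shard0000" });
    // 2. Drop the freshly cloned collection on the old primary, localhost:30001
    //    (the "CMD: drop foo.coll0" lines on m30001):
    //      db.getSiblingDB("foo").runCommand({ drop: "coll0" })
    // 3. Release the distributed lock by resetting its state to 0.
    db.getSiblingDB("config").locks.update(
        { _id: "foo-movePrimary", ts: ObjectId("4fd97a330955c08e55c3f85f") },
        { $set: { state: 0 } });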
m30999| Thu Jun 14 01:44:21 [conn] setShardVersion shard0001 localhost:30001 foo.coll1 { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ee5ce0
m30999| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), ok: 1.0 }
m30999| Thu Jun 14 01:44:21 [conn] setShardVersion shard0001 localhost:30001 foo.coll2 { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0001", shardHost: "localhost:30001" } 0x8ee5ce0
m30999| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), ok: 1.0 }
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.databases { _id: "foo" }
m30000| Thu Jun 14 01:44:21 [conn9] query config.databases query: { _id: "foo" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1084 w:116 reslen:75 0ms
m30998| Thu Jun 14 01:44:21 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30000| Thu Jun 14 01:44:21 [conn7] runQuery called config.shards {}
m30000| Thu Jun 14 01:44:21 [conn7] query config.shards ntoreturn:0 keyUpdates:0 locks(micros) r:188 w:223 nreturned:2 reslen:120 0ms
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.collections { _id: /^foo\./ }
m30000| Thu Jun 14 01:44:21 [conn9] query config.collections query: { _id: /^foo\./ } ntoreturn:0 keyUpdates:0 locks(micros) r:1284 w:116 nreturned:2 reslen:238 0ms
m30000| Thu Jun 14 01:44:21 [conn7] runQuery called config.chunks { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:21 [conn7] query config.chunks query: { $or: [ { ns: "foo.coll1", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:398 w:223 nreturned:1 reslen:167 0ms
m30998| Thu Jun 14 01:44:21 [conn] ChunkManager: time to load chunks for foo.coll1: 0ms sequenceNumber: 2 version: 1|0||4fd97a330955c08e55c3f85b based on: (empty)
m30000| Thu Jun 14 01:44:21 [conn7] runQuery called config.chunks { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:21 [conn7] query config.chunks query: { $or: [ { ns: "foo.coll2", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:560 w:223 nreturned:1 reslen:167 0ms
m30998| Thu Jun 14 01:44:21 [conn] ChunkManager: time to load chunks for foo.coll2: 0ms sequenceNumber: 3 version: 1|0||4fd97a330955c08e55c3f85c based on: (empty)
m30998| Thu Jun 14 01:44:21 [conn] creating new connection to:localhost:30000
m30998| Thu Jun 14 01:44:21 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:44:21 [initandlisten] connection accepted from 127.0.0.1:60371 #13 (13 connections now open)
m30998| Thu Jun 14 01:44:21 [conn] connected connection!
m30998| Thu Jun 14 01:44:21 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97a311d2308d2e916ca00
m30998| Thu Jun 14 01:44:21 [conn] initializing shard connection to localhost:30000
m30000| Thu Jun 14 01:44:21 [conn13] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true }
m30000| Thu Jun 14 01:44:21 [conn13] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true }
m30000| Thu Jun 14 01:44:21 [conn13] command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true }
m30000| Thu Jun 14 01:44:21 [conn13] entering shard mode for connection
m30000| Thu Jun 14 01:44:21 [conn13] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
m30998| Thu Jun 14 01:44:21 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:44:21 BackgroundJob starting: WriteBackListener-localhost:30000
m30000| Thu Jun 14 01:44:21 [conn7] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a311d2308d2e916ca00') }
m30000| Thu Jun 14 01:44:21 [conn7] run command admin.$cmd { writebacklisten: ObjectId('4fd97a311d2308d2e916ca00') }
m30000| Thu Jun 14 01:44:21 [conn7] command: { writebacklisten: ObjectId('4fd97a311d2308d2e916ca00') }
m30998| Thu Jun 14 01:44:21 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:21 [initandlisten] connection accepted from 127.0.0.1:48947 #6 (5 connections now open)
m30998| Thu Jun 14 01:44:21 [conn] connected connection!
m30998| Thu Jun 14 01:44:21 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97a311d2308d2e916ca00
m30998| Thu Jun 14 01:44:21 [conn] initializing shard connection to localhost:30001
m30001| Thu Jun 14 01:44:21 [conn6] runQuery called admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true }
m30001| Thu Jun 14 01:44:21 [conn6] run command admin.$cmd { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true }
m30001| Thu Jun 14 01:44:21 [conn6] command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true }
m30001| Thu Jun 14 01:44:21 [conn6] entering shard mode for connection
m30001| Thu Jun 14 01:44:21 [conn6] command admin.$cmd command: { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('4fd97a311d2308d2e916ca00'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:51 0ms
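This is mongosB (m30998) touching both shards for the first time: each new shard connection is put into shard mode with an init-style setShardVersion, and a WriteBackListener thread starts a writebacklisten long-poll per shard. The internal commands, as they appear in the log (not something applications normally send themselves):

    // Sent once per pooled connection to enter "shard mode"; the serverID is
    // mongosB's identity (4fd97a311d2308d2e916ca00 in this run).
    db.adminCommand({ setShardVersion: "", init: true,
                      configdb: "localhost:30000",
                      serverID: ObjectId("4fd97a311d2308d2e916ca00"),
                      authoritative: true });
    // The companion writebacklisten command is a blocking long-poll, so it is
    // only shown here as a comment:
    //   db.adminCommand({ writebacklisten: ObjectId("4fd97a311d2308d2e916ca00") })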
m30000| Thu Jun 14 01:44:21 [conn13] runQuery called foo.coll0 {}
m30000| Thu Jun 14 01:44:21 [conn13] query foo.coll0 ntoreturn:1 keyUpdates:0 locks(micros) r:52 nreturned:1 reslen:59 0ms
m30998| Thu Jun 14 01:44:21 [conn] resetting shard version of foo.coll1 on localhost:30000, version is zero
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion shard0000 localhost:30000 foo.coll1 { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0f8e88
m30000| Thu Jun 14 01:44:21 [conn13] runQuery called admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] run command admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] command admin.$cmd command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:12 r:52 reslen:86 0ms
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion shard0001 localhost:30001 foo.coll1 { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } 0xa0f7e18
m30001| Thu Jun 14 01:44:21 [conn6] runQuery called admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] run command admin.$cmd { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] command admin.$cmd command: { setShardVersion: "foo.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85b'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 reslen:86 0ms
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:44:21 [conn6] runQuery called foo.coll1 {}
m30001| Thu Jun 14 01:44:21 [conn6] query foo.coll1 ntoreturn:1 keyUpdates:0 locks(micros) r:45 nreturned:1 reslen:59 0ms
m30998| Thu Jun 14 01:44:21 [conn] resetting shard version of foo.coll2 on localhost:30000, version is zero
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion shard0000 localhost:30000 foo.coll2 { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0f8e88
m30000| Thu Jun 14 01:44:21 [conn13] runQuery called admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] run command admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] command admin.$cmd command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:52 reslen:86 0ms
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion shard0001 localhost:30001 foo.coll2 { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } 0xa0f7e18
m30001| Thu Jun 14 01:44:21 [conn6] runQuery called admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] run command admin.$cmd { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] command admin.$cmd command: { setShardVersion: "foo.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85c'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) r:45 reslen:86 0ms
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30001| Thu Jun 14 01:44:21 [conn6] runQuery called foo.coll2 {}
m30001| Thu Jun 14 01:44:21 [conn6] query foo.coll2 ntoreturn:1 keyUpdates:0 locks(micros) r:76 nreturned:1 reslen:59 0ms
m30001| Thu Jun 14 01:44:21 [initandlisten] connection accepted from 127.0.0.1:48948 #7 (6 connections now open)
m30001| Thu Jun 14 01:44:21 [conn7] runQuery called foo.$cmd { count: "system.indexes", query: {}, fields: {} }
m30001| Thu Jun 14 01:44:21 [conn7] run command foo.$cmd { count: "system.indexes", query: {}, fields: {} }
m30001| Thu Jun 14 01:44:21 [conn7] command foo.$cmd command: { count: "system.indexes", query: {}, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:21 reslen:48 0ms
m30000| Thu Jun 14 01:44:21 [initandlisten] connection accepted from 127.0.0.1:60374 #14 (14 connections now open)
m30000| Thu Jun 14 01:44:21 [conn14] runQuery called foo.$cmd { count: "system.indexes", query: {}, fields: {} }
m30000| Thu Jun 14 01:44:21 [conn14] run command foo.$cmd { count: "system.indexes", query: {}, fields: {} }
m30000| Thu Jun 14 01:44:21 [conn14] command foo.$cmd command: { count: "system.indexes", query: {}, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:21 reslen:48 0ms
----
Running movePrimary for bar through mongosB ...
----
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.databases { _id: "bar" }
m30000| Thu Jun 14 01:44:21 [conn9] query config.databases query: { _id: "bar" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1311 w:116 reslen:75 0ms
m30998| Thu Jun 14 01:44:21 [conn] DBConfig unserialize: bar { _id: "bar", partitioned: true, primary: "shard0000" }
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.collections { _id: /^bar\./ }
m30000| Thu Jun 14 01:44:21 [conn9] query config.collections query: { _id: /^bar\./ } ntoreturn:0 keyUpdates:0 locks(micros) r:1526 w:116 nreturned:2 reslen:238 0ms
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.chunks { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:21 [conn8] query config.chunks query: { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:376 w:390 nreturned:1 reslen:167 0ms
m30998| Thu Jun 14 01:44:21 [conn] ChunkManager: time to load chunks for bar.coll1: 0ms sequenceNumber: 4 version: 1|0||4fd97a330955c08e55c3f85d based on: (empty)
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.chunks { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:21 [conn8] query config.chunks query: { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:543 w:390 nreturned:1 reslen:167 0ms
m30998| Thu Jun 14 01:44:21 [conn] ChunkManager: time to load chunks for bar.coll2: 0ms sequenceNumber: 5 version: 1|0||4fd97a330955c08e55c3f85e based on: (empty)
m30000| Thu Jun 14 01:44:21 [conn13] runQuery called bar.coll0 {}
m30000| Thu Jun 14 01:44:21 [conn13] query bar.coll0 ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:77 nreturned:1 reslen:59 0ms
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion shard0000 localhost:30000 bar.coll1 { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0f8e88
m30000| Thu Jun 14 01:44:21 [conn13] runQuery called admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] run command admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:21 [conn13] command admin.$cmd command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:77 reslen:86 0ms
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:44:21 [conn] resetting shard version of bar.coll1 on localhost:30001, version is zero
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion shard0001 localhost:30001 bar.coll1 { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } 0xa0f7e18
m30001| Thu Jun 14 01:44:21 [conn6] runQuery called admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] run command admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:21 [conn6] command admin.$cmd command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:5 r:76 reslen:86 0ms
m30998| Thu Jun 14 01:44:21 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:44:21 [conn13] runQuery called bar.coll1 {}
m30000| Thu Jun 14 01:44:21 [conn13] query bar.coll1 ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:126 nreturned:1 reslen:59 0ms
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.databases { _id: "admin" }
m30000| Thu Jun 14 01:44:21 [conn9] query config.databases query: { _id: "admin" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1549 w:116 reslen:74 0ms
m30998| Thu Jun 14 01:44:21 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.collections { _id: /^admin\./ }
m30000| Thu Jun 14 01:44:21 [conn9] query config.collections query: { _id: /^admin\./ } ntoreturn:0 keyUpdates:0 locks(micros) r:1701 w:116 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.databases { _id: "bar" }
m30000| Thu Jun 14 01:44:21 [conn9] query config.databases query: { _id: "bar" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1718 w:116 reslen:75 0ms
m30998| Thu Jun 14 01:44:21 [conn] DBConfig unserialize: bar { _id: "bar", partitioned: true, primary: "shard0000" }
m30000| Thu Jun 14 01:44:21 [conn9] runQuery called config.collections { _id: /^bar\./ }
m30000| Thu Jun 14 01:44:21 [conn9] query config.collections query: { _id: /^bar\./ } ntoreturn:0 keyUpdates:0 locks(micros) r:1845 w:116 nreturned:2 reslen:238 0ms
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.chunks { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:21 [conn8] query config.chunks query: { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:733 w:390 nreturned:1 reslen:167 0ms
m30998| Thu Jun 14 01:44:21 [conn] ChunkManager: time to load chunks for bar.coll1: 1ms sequenceNumber: 6 version: 1|0||4fd97a330955c08e55c3f85d based on: (empty)
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.chunks { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:21 [conn8] query config.chunks query: { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) r:949 w:390 nreturned:1 reslen:167 0ms
m30998| Thu Jun 14 01:44:21 [conn] ChunkManager: time to load chunks for bar.coll2: 0ms sequenceNumber: 7 version: 1|0||4fd97a330955c08e55c3f85e based on: (empty)
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.shards { host: "localhost:30001" }
m30000| Thu Jun 14 01:44:21 [conn8] query config.shards query: { host: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) r:1027 w:390 nreturned:1 reslen:70 0ms
m30998| Thu Jun 14 01:44:21 [conn] Moving bar primary from: shard0000:localhost:30000 to: shard0001:localhost:30001
m30998| Thu Jun 14 01:44:21 [conn] created new distributed lock for bar-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.locks { _id: "bar-movePrimary" }
m30000| Thu Jun 14 01:44:21 [conn8] query config.locks query: { _id: "bar-movePrimary" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1047 w:390 reslen:20 0ms
m30998| Thu Jun 14 01:44:21 [conn] inserting initial doc in config.locks for lock bar-movePrimary
m30000| Thu Jun 14 01:44:21 [conn8] insert config.locks keyUpdates:0 locks(micros) r:1047 w:466 0ms
m30998| Thu Jun 14 01:44:21 [conn] about to acquire distributed lock 'bar-movePrimary/domU-12-31-39-01-70-B4:30998:1339652657:1804289383:
m30998| { "state" : 1,
m30998| "who" : "domU-12-31-39-01-70-B4:30998:1339652657:1804289383:conn:596516649",
m30998| "process" : "domU-12-31-39-01-70-B4:30998:1339652657:1804289383",
m30998| "when" : { "$date" : "Thu Jun 14 01:44:21 2012" },
m30998| "why" : "Moving primary shard of bar",
m30998| "ts" : { "$oid" : "4fd97a351d2308d2e916ca02" } }
m30998| { "_id" : "bar-movePrimary",
m30998| "state" : 0 }
m30000| Thu Jun 14 01:44:21 [conn8] running multiple plans
m30000| Thu Jun 14 01:44:21 [conn8] update config.locks query: { _id: "bar-movePrimary", state: 0 } update: { $set: { state: 1, who: "domU-12-31-39-01-70-B4:30998:1339652657:1804289383:conn:596516649", process: "domU-12-31-39-01-70-B4:30998:1339652657:1804289383", when: new Date(1339652661025), why: "Moving primary shard of bar", ts: ObjectId('4fd97a351d2308d2e916ca02') } } nscanned:1 nmoved:1 nupdated:1 keyUpdates:0 locks(micros) r:1047 w:757 0ms
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn8] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn8] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1047 w:757 reslen:85 0ms
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.locks { _id: "bar-movePrimary" }
m30000| Thu Jun 14 01:44:21 [conn8] query config.locks query: { _id: "bar-movePrimary" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1065 w:757 reslen:267 0ms
m30000| Thu Jun 14 01:44:21 [conn8] update config.locks query: { _id: "bar-movePrimary" } update: { $set: { state: 2, who: "domU-12-31-39-01-70-B4:30998:1339652657:1804289383:conn:596516649", process: "domU-12-31-39-01-70-B4:30998:1339652657:1804289383", when: new Date(1339652661025), why: "Moving primary shard of bar", ts: ObjectId('4fd97a351d2308d2e916ca02') } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:1065 w:830 0ms
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn8] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:21 [conn8] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1065 w:830 reslen:85 0ms
m30000| Thu Jun 14 01:44:21 [conn8] runQuery called config.locks { _id: "bar-movePrimary" }
m30000| Thu Jun 14 01:44:21 [conn8] query config.locks query: { _id: "bar-movePrimary" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:1079 w:830 reslen:267 0ms
m30998| Thu Jun 14 01:44:21 [conn] distributed lock 'bar-movePrimary/domU-12-31-39-01-70-B4:30998:1339652657:1804289383' acquired, ts : 4fd97a351d2308d2e916ca02
m30998| Thu Jun 14 01:44:21 [conn] Coll : bar.coll1 sharded? 1
m30998| Thu Jun 14 01:44:21 [conn] Coll : bar.coll2 sharded? 1
m30998| Thu Jun 14 01:44:21 [conn] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:44:21 BackgroundJob starting: WriteBackListener-localhost:30001
m30998| Thu Jun 14 01:44:21 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30998| Thu Jun 14 01:44:21 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:21 [initandlisten] connection accepted from 127.0.0.1:48950 #8 (7 connections now open)
m30998| Thu Jun 14 01:44:21 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:21 [initandlisten] connection accepted from 127.0.0.1:48951 #9 (8 connections now open)
m30998| Thu Jun 14 01:44:21 [conn] connected connection!
m30001| Thu Jun 14 01:44:21 [conn8] runQuery called bar.$cmd { clone: "localhost:30000", collsToIgnore: [ "bar.coll1", "bar.coll2" ] }
m30001| Thu Jun 14 01:44:21 [conn8] run command bar.$cmd { clone: "localhost:30000", collsToIgnore: [ "bar.coll1", "bar.coll2" ] }
m30001| Thu Jun 14 01:44:21 [conn8] opening db: bar
m30001| Thu Jun 14 01:44:21 [conn8] creating new connection to:localhost:30000
m30001| Thu Jun 14 01:44:21 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:21 [conn8] connected connection!
m30001| Thu Jun 14 01:44:21 [conn8] cloner got { name: "bar.system.indexes" }
m30001| Thu Jun 14 01:44:21 [conn8] not cloning because system collection
m30001| Thu Jun 14 01:44:21 [conn8] cloner got { name: "bar.coll0.$_id_" }
m30001| Thu Jun 14 01:44:21 [conn8] not cloning because has $
m30001| Thu Jun 14 01:44:21 [conn8] cloner got { name: "bar.coll0" }
m30001| Thu Jun 14 01:44:21 [conn8] not ignoring collection bar.coll0
m30001| Thu Jun 14 01:44:21 [conn8] cloner got { name: "bar.coll1.$_id_" }
m30001| Thu Jun 14 01:44:21 [conn8] not cloning because has $
m30001| Thu Jun 14 01:44:21 [conn8] cloner got { name: "bar.coll1" }
m30001| Thu Jun 14 01:44:21 [conn8] ignoring collection bar.coll1
m30001| Thu Jun 14 01:44:21 [conn8] cloner got { name: "bar.coll2.$_id_" }
m30001| Thu Jun 14 01:44:21 [conn8] not cloning because has $
m30001| Thu Jun 14 01:44:21 [conn8] cloner got { name: "bar.coll2" }
m30001| Thu Jun 14 01:44:21 [conn8] ignoring collection bar.coll2
m30001| Thu Jun 14 01:44:21 [conn8] really will clone: { name: "bar.coll0" }
m30001| Thu Jun 14 01:44:21 [conn8] create collection bar.coll0 {}
m30000| Thu Jun 14 01:44:21 [initandlisten] connection accepted from 127.0.0.1:60377 #15 (15 connections now open)
m30000| Thu Jun 14 01:44:21 [conn15] runQuery called bar.system.namespaces {}
m30000| Thu Jun 14 01:44:21 [conn15] query bar.system.namespaces ntoreturn:0 keyUpdates:0 locks(micros) r:46 nreturned:7 reslen:222 0ms
m30001| Thu Jun 14 01:44:21 [FileAllocator] allocating new datafile /data/db/test1/bar.ns, filling with zeroes...
m30001| Thu Jun 14 01:44:21 [FileAllocator] flushing directory /data/db/test1
m30998| Thu Jun 14 01:44:21 [WriteBackListener-localhost:30001] connected connection!
m30001| Thu Jun 14 01:44:21 [conn9] runQuery called admin.$cmd { writebacklisten: ObjectId('4fd97a311d2308d2e916ca00') }
m30001| Thu Jun 14 01:44:21 [conn9] run command admin.$cmd { writebacklisten: ObjectId('4fd97a311d2308d2e916ca00') }
m30001| Thu Jun 14 01:44:21 [conn9] command: { writebacklisten: ObjectId('4fd97a311d2308d2e916ca00') }
m30000| Thu Jun 14 01:44:21 [FileAllocator] flushing directory /data/db/test0
m30001| Thu Jun 14 01:44:21 [FileAllocator] done allocating datafile /data/db/test1/bar.ns, size: 16MB, took 0.852 secs
m30001| Thu Jun 14 01:44:21 [FileAllocator] allocating new datafile /data/db/test1/bar.0, filling with zeroes...
m30000| Thu Jun 14 01:44:21 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.886 secs
m30001| Thu Jun 14 01:44:21 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:44:22 [FileAllocator] done allocating datafile /data/db/test1/bar.0, size: 16MB, took 0.267 secs
m30001| Thu Jun 14 01:44:22 [conn8] allocExtent bar.coll0 size 8192 0
m30001| Thu Jun 14 01:44:22 [conn8] New namespace: bar.coll0
m30001| Thu Jun 14 01:44:22 [conn8] allocExtent bar.system.namespaces size 1536 0
m30001| Thu Jun 14 01:44:22 [conn8] New namespace: bar.system.namespaces
m30001| Thu Jun 14 01:44:22 [conn8] cloning bar.coll0 -> bar.coll0
m30001| Thu Jun 14 01:44:22 [conn8] cloning collection bar.coll0 to bar.coll0 on localhost:30000 with filter { query: {}, $snapshot: true }
m30001| Thu Jun 14 01:44:22 [conn8] adding _id index for collection bar.coll0
m30001| Thu Jun 14 01:44:22 [conn8] allocExtent bar.system.indexes size 3584 0
m30001| Thu Jun 14 01:44:22 [conn8] New namespace: bar.system.indexes
m30001| Thu Jun 14 01:44:22 [conn8] build index bar.coll0 { _id: 1 }
m30001| mem info: before index start vsize: 190 resident: 49 mapped: 64
m30001| Thu Jun 14 01:44:22 [conn8] external sort root: /data/db/test1/_tmp/esort.1339652662.3/
m30001| mem info: before final sort vsize: 190 resident: 49 mapped: 64
m30001| Thu Jun 14 01:44:22 [conn8] not using file. size:35 _compares:0
m30001| mem info: after final sort vsize: 190 resident: 49 mapped: 64
m30001| Thu Jun 14 01:44:22 [conn8] external sort used : 0 files in 0 secs
m30001| Thu Jun 14 01:44:22 [conn8] allocExtent bar.coll0.$_id_ size 36864 0
m30001| Thu Jun 14 01:44:22 [conn8] New namespace: bar.coll0.$_id_
m30001| Thu Jun 14 01:44:22 [conn8] done building bottom layer, going to commit
m30001| Thu Jun 14 01:44:22 [conn8] fastBuildIndex dupsToDrop:0
m30001| Thu Jun 14 01:44:22 [conn8] build index done. scanned 1 total records. 0 secs
m30001| Thu Jun 14 01:44:22 [conn8] cloning collection bar.system.indexes to bar.system.indexes on localhost:30000 with filter { name: { $ne: "_id_" }, ns: { $nin: [ "bar.coll1", "bar.coll2" ] } }
m30001| Thu Jun 14 01:44:22 [conn8] command bar.$cmd command: { clone: "localhost:30000", collsToIgnore: [ "bar.coll1", "bar.coll2" ] } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) W:20 w:1131472 reslen:72 1132ms
m30001| Thu Jun 14 01:44:22 [FileAllocator] allocating new datafile /data/db/test1/bar.1, filling with zeroes...
{ "primary " : "shard0001:localhost:30001", "ok" : 1 }
----
Run!
----
m30001| Thu Jun 14 01:44:22 [conn3] runQuery called bar.coll0 {}
m30001| Thu Jun 14 01:44:22 [conn3] query bar.coll0 ntoreturn:1 keyUpdates:0 locks(micros) W:286 r:108 w:705648 nreturned:1 reslen:59 0ms
m30001| Thu Jun 14 01:44:22 [conn6] runQuery called bar.coll0 {}
m30001| Thu Jun 14 01:44:22 [conn6] query bar.coll0 ntoreturn:1 keyUpdates:0 locks(micros) W:5 r:98 nreturned:1 reslen:59 0ms
m30001| Thu Jun 14 01:44:22 [conn6] runQuery called admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:22 [conn6] run command admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:22 [conn6] command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" }
m30001| Thu Jun 14 01:44:22 [conn6] command admin.$cmd command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:9 r:98 reslen:86 0ms
m30001| Thu Jun 14 01:44:22 [initandlisten] connection accepted from 127.0.0.1:48954 #10 (9 connections now open)
m30001| Thu Jun 14 01:44:22 [conn10] runQuery called bar.$cmd { count: "system.indexes", query: {}, fields: {} }
m30001| Thu Jun 14 01:44:22 [conn10] run command bar.$cmd { count: "system.indexes", query: {}, fields: {} }
m30001| Thu Jun 14 01:44:22 [conn10] command bar.$cmd command: { count: "system.indexes", query: {}, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:18 reslen:48 0ms
m30999| Thu Jun 14 01:44:22 [conn] DBConfig unserialize: bar { _id: "bar", partitioned: true, primary: "shard0001" }
m30999| Thu Jun 14 01:44:22 [conn] ChunkManager: time to load chunks for bar.coll1: 0ms sequenceNumber: 8 version: 1|0||4fd97a330955c08e55c3f85d based on: (empty)
m30999| Thu Jun 14 01:44:22 [conn] ChunkManager: time to load chunks for bar.coll2: 0ms sequenceNumber: 9 version: 1|0||4fd97a330955c08e55c3f85e based on: (empty)
m30999| Thu Jun 14 01:44:22 [conn] setShardVersion shard0000 localhost:30000 bar.coll1 { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ee5858
m30999| Thu Jun 14 01:44:22 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), ok: 1.0 }
m30999| Thu Jun 14 01:44:22 [conn] setShardVersion shard0000 localhost:30000 bar.coll2 { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } 0x8ee5858
m30999| Thu Jun 14 01:44:22 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), ok: 1.0 }
m30999| Thu Jun 14 01:44:22 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30998| Thu Jun 14 01:44:22 [conn] movePrimary dropping cloned collection bar.coll0 on localhost:30000
m30998| Thu Jun 14 01:44:22 [conn] distributed lock 'bar-movePrimary/domU-12-31-39-01-70-B4:30998:1339652657:1804289383' unlocked.
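The 'bar-movePrimary' distributed lock released here is an ordinary document in config.locks on the config server; "unlocked" corresponds to the { $set: { state: 0 } } update that appears a few lines below on m30000. It can be inspected from any shell, for example:

// Inspect distributed lock state on the config server (real config collections; the filters are just examples).
var config = db.getSiblingDB("config");
config.locks.find({ _id: "bar-movePrimary" }).forEach(printjson);  // state: 0 means unlocked
config.locks.find({ state: { $gt: 0 } }).forEach(printjson);       // any locks currently being held or acquired
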
m30998| Thu Jun 14 01:44:22 [conn] setShardVersion shard0000 localhost:30000 bar.coll1 { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0f8e88
m30998| Thu Jun 14 01:44:22 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), ok: 1.0 }
m30998| Thu Jun 14 01:44:22 [conn] setShardVersion shard0000 localhost:30000 bar.coll2 { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } 0xa0f8e88
m30998| Thu Jun 14 01:44:22 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30998| Thu Jun 14 01:44:22 [conn] resetting shard version of bar.coll2 on localhost:30001, version is zero
m30998| Thu Jun 14 01:44:22 [conn] setShardVersion shard0001 localhost:30001 bar.coll2 { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0001", shardHost: "localhost:30001" } 0xa0f7e18
m30998| Thu Jun 14 01:44:22 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Thu Jun 14 01:44:22 [conn15] runQuery called admin.$cmd { availablequeryoptions: 1 }
m30000| Thu Jun 14 01:44:22 [conn15] run command admin.$cmd { availablequeryoptions: 1 }
m30000| Thu Jun 14 01:44:22 [conn15] command admin.$cmd command: { availablequeryoptions: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:46 reslen:50 0ms
m30000| Thu Jun 14 01:44:22 [conn15] runQuery called bar.coll0 { query: {}, $snapshot: true }
m30000| Thu Jun 14 01:44:22 [conn15] query bar.coll0 query: { query: {}, $snapshot: true } ntoreturn:0 keyUpdates:0 locks(micros) r:170 nreturned:1 reslen:59 0ms
m30000| Thu Jun 14 01:44:22 [conn15] runQuery called bar.system.indexes { name: { $ne: "_id_" }, ns: { $nin: [ "bar.coll1", "bar.coll2" ] } }
m30000| Thu Jun 14 01:44:22 [conn15] query bar.system.indexes query: { name: { $ne: "_id_" }, ns: { $nin: [ "bar.coll1", "bar.coll2" ] } } ntoreturn:0 keyUpdates:0 locks(micros) r:301 nreturned:0 reslen:20 0ms
m30000| Thu Jun 14 01:44:22 [conn15] SocketException: remote: 127.0.0.1:60377 error: 9001 socket exception [0] server [127.0.0.1:60377]
m30000| Thu Jun 14 01:44:22 [conn15] end connection 127.0.0.1:60377 (14 connections now open)
m30000| Thu Jun 14 01:44:22 [conn9] update config.databases query: { _id: "bar" } update: { _id: "bar", partitioned: true, primary: "shard0001" } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:1845 w:179 0ms
m30000| Thu Jun 14 01:44:22 [conn9] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:22 [conn9] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:22 [conn9] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1845 w:179 reslen:85 0ms
m30000| Thu Jun 14 01:44:22 [conn8] runQuery called bar.$cmd { drop: "coll0" }
m30000| Thu Jun 14 01:44:22 [conn8] run command bar.$cmd { drop: "coll0" }
m30000| Thu Jun 14 01:44:22 [conn8] CMD: drop bar.coll0
m30000| Thu Jun 14 01:44:22 [conn8] dropCollection: bar.coll0
m30000| Thu Jun 14 01:44:22 [conn8] create collection bar.$freelist {}
m30000| Thu Jun 14 01:44:22 [conn8] allocExtent bar.$freelist size 8192 0
m30000| Thu Jun 14 01:44:22 [conn8] dropIndexes done
m30000| Thu Jun 14 01:44:22 [conn8] command bar.$cmd command: { drop: "coll0" } ntoreturn:1 keyUpdates:0 locks(micros) r:1079 w:1449 reslen:116 0ms
m30000| Thu Jun 14 01:44:22 [conn8] running multiple plans
m30000| Thu Jun 14 01:44:22 [conn8] update config.locks query: { _id: "bar-movePrimary", ts: ObjectId('4fd97a351d2308d2e916ca02') } update: { $set: { state: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) r:1079 w:1614 0ms
m30000| Thu Jun 14 01:44:22 [conn8] runQuery called admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:22 [conn8] run command admin.$cmd { getlasterror: 1 }
m30000| Thu Jun 14 01:44:22 [conn8] command admin.$cmd command: { getlasterror: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:1079 w:1614 reslen:85 0ms
m30000| Thu Jun 14 01:44:22 [conn3] runQuery called config.databases { _id: "bar" }
m30000| Thu Jun 14 01:44:22 [conn3] query config.databases query: { _id: "bar" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:2448 w:6098 reslen:75 0ms
m30000| Thu Jun 14 01:44:22 [conn3] runQuery called config.collections { _id: /^bar\./ }
m30000| Thu Jun 14 01:44:22 [conn3] query config.collections query: { _id: /^bar\./ } ntoreturn:0 keyUpdates:0 locks(micros) r:2622 w:6098 nreturned:2 reslen:238 0ms
m30000| Thu Jun 14 01:44:22 [conn5] runQuery called config.chunks { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:22 [conn5] query config.chunks query: { $or: [ { ns: "bar.coll1", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) W:24 r:2642 w:1175615 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:22 [conn5] runQuery called config.chunks { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] }
m30000| Thu Jun 14 01:44:22 [conn5] query config.chunks query: { $or: [ { ns: "bar.coll2", lastmod: { $gte: Timestamp 0|0 } } ] } ntoreturn:1000000 keyUpdates:0 locks(micros) W:24 r:2825 w:1175615 nreturned:1 reslen:167 0ms
m30000| Thu Jun 14 01:44:22 [conn10] runQuery called admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn10] run command admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn10] command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn10] command admin.$cmd command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:25890 w:1196023 reslen:86 0ms
m30000| Thu Jun 14 01:44:22 [conn10] runQuery called bar.coll1 {}
m30000| Thu Jun 14 01:44:22 [conn10] query bar.coll1 ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:25923 w:1196023 nreturned:1 reslen:59 0ms
m30000| Thu Jun 14 01:44:22 [conn10] runQuery called admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn10] run command admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn10] command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn10] command admin.$cmd command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a310955c08e55c3f859'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:25923 w:1196023 reslen:86 0ms
m30000| Thu Jun 14 01:44:22 [conn10] runQuery called bar.coll2 {}
m30000| Thu Jun 14 01:44:22 [conn10] query bar.coll2 ntoreturn:1 keyUpdates:0 locks(micros) W:104 r:25948 w:1196023 nreturned:1 reslen:59 0ms
m30000| Thu Jun 14 01:44:22 [conn13] runQuery called admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn13] run command admin.$cmd { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn13] command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn13] command admin.$cmd command: { setShardVersion: "bar.coll1", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85d'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:126 reslen:86 0ms
m30000| Thu Jun 14 01:44:22 [conn13] runQuery called bar.coll1 {}
m30000| Thu Jun 14 01:44:22 [conn13] query bar.coll1 ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:151 nreturned:1 reslen:59 0ms
m30000| Thu Jun 14 01:44:22 [conn13] runQuery called admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn13] run command admin.$cmd { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn13] command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" }
m30000| Thu Jun 14 01:44:22 [conn13] command admin.$cmd command: { setShardVersion: "bar.coll2", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a330955c08e55c3f85e'), serverID: ObjectId('4fd97a311d2308d2e916ca00'), shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:151 reslen:86 0ms
m30000| Thu Jun 14 01:44:22 [conn13] runQuery called bar.coll2 {}
m30000| Thu Jun 14 01:44:22 [conn13] query bar.coll2 ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:176 nreturned:1 reslen:59 0ms
m30000| Thu Jun 14 01:44:22 [initandlisten] connection accepted from 127.0.0.1:60378 #16 (15 connections now open)
m30000| Thu Jun 14 01:44:22 [conn16] runQuery called bar.$cmd { count: "system.indexes", query: {}, fields: {} }
m30000| Thu Jun 14 01:44:22 [conn16] run command bar.$cmd { count: "system.indexes", query: {}, fields: {} }
m30000| Thu Jun 14 01:44:22 [conn16] command bar.$cmd command: { count: "system.indexes", query: {}, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:21 reslen:48 0ms
m30000| Thu Jun 14 01:44:22 [conn3] SocketException: remote: 127.0.0.1:60352 error: 9001 socket exception [0] server [127.0.0.1:60352]
m30000| Thu Jun 14 01:44:22 [conn3] end connection 127.0.0.1:60352 (14 connections now open)
m30000| Thu Jun 14 01:44:22 [conn5] SocketException: remote: 127.0.0.1:60356 error: 9001 socket exception [0] server [127.0.0.1:60356]
m30000| Thu Jun 14 01:44:22 [conn5] end connection 127.0.0.1:60356 (13 connections now open)
m30000| Thu Jun 14 01:44:22 [conn12] SocketException: remote: 127.0.0.1:60369 error: 9001 socket exception [0] server [127.0.0.1:60369]
m30000| Thu Jun 14 01:44:22 [conn12] end connection 127.0.0.1:60369 (12 connections now open)
m30000| Thu Jun 14 01:44:22 [conn10] SocketException: remote: 127.0.0.1:60365 error: 9001 socket exception [0] server [127.0.0.1:60365]
m30000| Thu Jun 14 01:44:22 [conn10] end connection 127.0.0.1:60365 (12 connections now open)
m30001| Thu Jun 14 01:44:22 [conn4] SocketException: remote: 127.0.0.1:48942 error: 9001 socket exception [0] server [127.0.0.1:48942]
m30001| Thu Jun 14 01:44:22 [conn4] end connection 127.0.0.1:48942 (8 connections now open)
m30001| Thu Jun 14 01:44:22 [conn3] SocketException: remote: 127.0.0.1:48941 error: 9001 socket exception [0] server [127.0.0.1:48941]
m30001| Thu Jun 14 01:44:22 [conn3] end connection 127.0.0.1:48941 (7 connections now open)
m30001| Thu Jun 14 01:44:22 [FileAllocator] flushing directory /data/db/test1
m30001| Thu Jun 14 01:44:22 [FileAllocator] done allocating datafile /data/db/test1/bar.1, size: 32MB, took 0.625 secs
Thu Jun 14 01:44:23 shell: stopped mongo program on port 30999
m30998| Thu Jun 14 01:44:23 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Thu Jun 14 01:44:23 [conn9] SocketException: remote: 127.0.0.1:60362 error: 9001 socket exception [0] server [127.0.0.1:60362]
m30000| Thu Jun 14 01:44:23 [conn9] end connection 127.0.0.1:60362 (10 connections now open)
m30000| Thu Jun 14 01:44:23 [conn6] SocketException: remote: 127.0.0.1:60358 error: 9001 socket exception [0] server [127.0.0.1:60358]
m30000| Thu Jun 14 01:44:23 [conn6] end connection 127.0.0.1:60358 (9 connections now open)
m30000| Thu Jun 14 01:44:23 [conn8] SocketException: remote: 127.0.0.1:60361 error: 9001 socket exception [0] server [127.0.0.1:60361]
m30000| Thu Jun 14 01:44:23 [conn8] end connection 127.0.0.1:60361 (9 connections now open)
m30000| Thu Jun 14 01:44:23 [conn13] SocketException: remote: 127.0.0.1:60371 error: 9001 socket exception [0] server [127.0.0.1:60371]
m30000| Thu Jun 14 01:44:23 [conn13] end connection 127.0.0.1:60371 (7 connections now open)
m30001| Thu Jun 14 01:44:23 [conn6] SocketException: remote: 127.0.0.1:48947 error: 9001 socket exception [0] server [127.0.0.1:48947]
m30001| Thu Jun 14 01:44:23 [conn6] end connection 127.0.0.1:48947 (6 connections now open)
m30001| Thu Jun 14 01:44:23 [conn8] SocketException: remote: 127.0.0.1:48950 error: 9001 socket exception [0] server [127.0.0.1:48950]
m30001| Thu Jun 14 01:44:23 [conn8] end connection 127.0.0.1:48950 (5 connections now open)
Thu Jun 14 01:44:24 shell: stopped mongo program on port 30998
m30000| Thu Jun 14 01:44:24 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:44:24 [interruptThread] now exiting
m30000| Thu Jun 14 01:44:24 dbexit:
m30000| Thu Jun 14 01:44:24 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:44:24 [interruptThread] closing listening socket: 12
m30000| Thu Jun 14 01:44:24 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:44:24 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:44:24 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:44:24 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:44:24 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:44:24 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Thu Jun 14 01:44:24 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:44:24 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:44:24 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:44:24 [interruptThread] shutdown: groupCommitMutex
m30000| Thu Jun 14 01:44:24 dbexit: really exiting now
Thu Jun 14 01:44:25 shell: stopped mongo program on port 30000
m30001| Thu Jun 14 01:44:25 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:44:25 [interruptThread] now exiting
m30001| Thu Jun 14 01:44:25 dbexit:
m30001| Thu Jun 14 01:44:25 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:44:25 [interruptThread] closing listening socket: 16
m30001| Thu Jun 14 01:44:25 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:44:25 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:44:25 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:44:25 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:44:25 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:44:25 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:44:25 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:44:25 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:44:25 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:44:25 [interruptThread] shutdown: groupCommitMutex
m30001| Thu Jun 14 01:44:25 dbexit: really exiting now
Thu Jun 14 01:44:26 shell: stopped mongo program on port 30001
*** ShardingTest test completed successfully in 10.158 seconds ***
10219.439983ms
Thu Jun 14 01:44:26 [initandlisten] connection accepted from 127.0.0.1:35036 #51 (5 connections now open)
*******************************************
Test : mrShardedOutput.js ...
Command : /mnt/slaves/Linux_32bit/mongo/mongo --port 27999 --nodb /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mrShardedOutput.js --eval TestData = new Object();TestData.testPath = "/mnt/slaves/Linux_32bit/mongo/jstests/sharding/mrShardedOutput.js";TestData.testFile = "mrShardedOutput.js";TestData.testName = "mrShardedOutput";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Thu Jun 14 01:44:26 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/mrShardedOutput0'
Thu Jun 14 01:44:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/mrShardedOutput0
m30000| Thu Jun 14 01:44:26
m30000| Thu Jun 14 01:44:26 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Thu Jun 14 01:44:26
m30000| Thu Jun 14 01:44:26 [initandlisten] MongoDB starting : pid=27586 port=30000 dbpath=/data/db/mrShardedOutput0 32-bit host=domU-12-31-39-01-70-B4
m30000| Thu Jun 14 01:44:26 [initandlisten]
m30000| Thu Jun 14 01:44:26 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Thu Jun 14 01:44:26 [initandlisten] ** Not recommended for production.
m30000| Thu Jun 14 01:44:26 [initandlisten]
m30000| Thu Jun 14 01:44:26 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Thu Jun 14 01:44:26 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Thu Jun 14 01:44:26 [initandlisten] ** with --journal, the limit is lower
m30000| Thu Jun 14 01:44:26 [initandlisten]
m30000| Thu Jun 14 01:44:26 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Thu Jun 14 01:44:26 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Thu Jun 14 01:44:26 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Thu Jun 14 01:44:26 [initandlisten] options: { dbpath: "/data/db/mrShardedOutput0", port: 30000 }
m30000| Thu Jun 14 01:44:26 [initandlisten] waiting for connections on port 30000
m30000| Thu Jun 14 01:44:26 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/mrShardedOutput1'
m30000| Thu Jun 14 01:44:26 [initandlisten] connection accepted from 127.0.0.1:60382 #1 (1 connection now open)
Thu Jun 14 01:44:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/mrShardedOutput1
m30001| Thu Jun 14 01:44:26
m30001| Thu Jun 14 01:44:26 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Thu Jun 14 01:44:26
m30001| Thu Jun 14 01:44:26 [initandlisten] MongoDB starting : pid=27599 port=30001 dbpath=/data/db/mrShardedOutput1 32-bit host=domU-12-31-39-01-70-B4
m30001| Thu Jun 14 01:44:26 [initandlisten]
m30001| Thu Jun 14 01:44:26 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Thu Jun 14 01:44:26 [initandlisten] ** Not recommended for production.
m30001| Thu Jun 14 01:44:26 [initandlisten]
m30001| Thu Jun 14 01:44:26 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Thu Jun 14 01:44:26 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Thu Jun 14 01:44:26 [initandlisten] ** with --journal, the limit is lower
m30001| Thu Jun 14 01:44:26 [initandlisten]
m30001| Thu Jun 14 01:44:26 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Thu Jun 14 01:44:26 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Thu Jun 14 01:44:26 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Thu Jun 14 01:44:26 [initandlisten] options: { dbpath: "/data/db/mrShardedOutput1", port: 30001 }
m30001| Thu Jun 14 01:44:26 [websvr] admin web console waiting for connections on port 31001
m30001| Thu Jun 14 01:44:26 [initandlisten] waiting for connections on port 30001
"localhost:30000"
ShardingTest mrShardedOutput :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000,
connection to localhost:30001
]
}
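mrShardedOutput.js drives this setup through the ShardingTest harness, which starts the two mongod shards and the mongos seen above. The test's exact options are not visible in this log; a rough sketch consistent with it (two shards, one mongos, and the 1 MB MaxChunkSize reported below) might look like the following, where the option spelling is an assumption:

// Rough sketch of the harness setup implied by the log; the exact options in
// jstests/sharding/mrShardedOutput.js are assumed, not shown here.
var st = new ShardingTest({ shards: 2, mongos: 1, other: { chunksize: 1 } });
var admin = st.s.getDB("admin");   // st.s is the mongos on port 30999
var testDB = st.s.getDB("test");
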
m30000| Thu Jun 14 01:44:26 [initandlisten] connection accepted from 127.0.0.1:60385 #2 (2 connections now open)
m30000| Thu Jun 14 01:44:26 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/config.ns, filling with zeroes...
m30000| Thu Jun 14 01:44:26 [FileAllocator] creating directory /data/db/mrShardedOutput0/_tmp
m30001| Thu Jun 14 01:44:26 [initandlisten] connection accepted from 127.0.0.1:48959 #1 (1 connection now open)
Thu Jun 14 01:44:26 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v
m30000| Thu Jun 14 01:44:26 [initandlisten] connection accepted from 127.0.0.1:60386 #3 (3 connections now open)
m30999| Thu Jun 14 01:44:26 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Thu Jun 14 01:44:26 [mongosMain] MongoS version 2.1.2-pre- starting: pid=27613 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Thu Jun 14 01:44:26 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Thu Jun 14 01:44:26 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Thu Jun 14 01:44:26 [mongosMain] options: { configdb: "localhost:30000", port: 30999, verbose: true }
m30999| Thu Jun 14 01:44:26 [mongosMain] config string : localhost:30000
m30999| Thu Jun 14 01:44:26 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:26 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:26 [mongosMain] connected connection!
m30000| Thu Jun 14 01:44:27 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/config.ns, size: 16MB, took 0.283 secs
m30000| Thu Jun 14 01:44:27 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/config.0, filling with zeroes...
m30000| Thu Jun 14 01:44:27 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/config.0, size: 16MB, took 0.275 secs
m30000| Thu Jun 14 01:44:27 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/config.1, filling with zeroes...
m30000| Thu Jun 14 01:44:27 [conn2] build index config.settings { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn2] insert config.settings keyUpdates:0 locks(micros) w:575541 575ms
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: CheckConfigServers
m30999| Thu Jun 14 01:44:27 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:27 [mongosMain] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:27 [mongosMain] connected connection!
m30999| Thu Jun 14 01:44:27 [CheckConfigServers] connected connection!
m30999| Thu Jun 14 01:44:27 [mongosMain] MaxChunkSize: 1
m30999| Thu Jun 14 01:44:27 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:44:27 [mongosMain] waiting for connections on port 30999
m30999| Thu Jun 14 01:44:27 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Thu Jun 14 01:44:27 [websvr] admin web console waiting for connections on port 31999
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: Balancer
m30999| Thu Jun 14 01:44:27 [Balancer] about to contact config servers and shards
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: cursorTimeout
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: PeriodicTask::Runner
m30000| Thu Jun 14 01:44:27 [initandlisten] connection accepted from 127.0.0.1:60390 #4 (4 connections now open)
m30000| Thu Jun 14 01:44:27 [initandlisten] connection accepted from 127.0.0.1:60391 #5 (5 connections now open)
m30000| Thu Jun 14 01:44:27 [conn5] build index config.version { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn3] build index config.chunks { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn3] info: creating collection config.chunks on add index
m30000| Thu Jun 14 01:44:27 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Thu Jun 14 01:44:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Thu Jun 14 01:44:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Thu Jun 14 01:44:27 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:44:27 [Balancer] config servers and shards contacted successfully
m30999| Thu Jun 14 01:44:27 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Jun 14 01:44:27
m30999| Thu Jun 14 01:44:27 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:27 [Balancer] creating new connection to:localhost:30000
m30000| Thu Jun 14 01:44:27 [conn3] build index config.shards { _id: 1 }
m30000| Thu Jun 14 01:44:27 [initandlisten] connection accepted from 127.0.0.1:60392 #6 (6 connections now open)
m30000| Thu Jun 14 01:44:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn3] info: creating collection config.shards on add index
m30000| Thu Jun 14 01:44:27 [conn3] build index config.shards { host: 1 }
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:27 [Balancer] connected connection!
m30000| Thu Jun 14 01:44:27 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn5] build index config.mongos { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:44:27 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:44:27 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Thu Jun 14 01:44:27 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:44:27 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a3b0d2fef4d6a507be1" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:44:27 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a3b0d2fef4d6a507be1
m30999| Thu Jun 14 01:44:27 [Balancer] *** start balancing round
m30000| Thu Jun 14 01:44:27 [conn6] build index config.locks { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn4] build index config.lockpings { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:27 [conn4] build index config.lockpings { ping: 1 }
m30000| Thu Jun 14 01:44:27 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Thu Jun 14 01:44:27 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1339652667:1804289383 (sleeping for 30000ms)
m30999| Thu Jun 14 01:44:27 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:44:27 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652667:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:44:27 [Balancer] no collections to balance
m30999| Thu Jun 14 01:44:27 [Balancer] no need to move any chunk
m30999| Thu Jun 14 01:44:27 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:44:27 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
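A balancer round like the one above is the mongos taking the 'balancer' distributed lock in config.locks, scanning config.chunks for collections whose chunks are unevenly spread, and releasing the lock; with only freshly created chunks there is nothing to move. The relevant state lives in ordinary config collections and can be checked from the shell:

// Balancer state lives in the config database (real collections; the queries are examples).
var config = db.getSiblingDB("config");
config.locks.find({ _id: "balancer" }).forEach(printjson);        // state/who/why of the current round
config.settings.find({ _id: "balancer" }).forEach(printjson);     // { stopped: true } here would disable it
config.settings.find({ _id: "chunksize" }).forEach(printjson);    // MaxChunkSize (1 MB in this run)
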
ShardingTest undefined going to add shard : localhost:30000
m30999| Thu Jun 14 01:44:27 [mongosMain] connection accepted from 127.0.0.1:54455 #1 (1 connection now open)
m30999| Thu Jun 14 01:44:27 [conn] couldn't find database [admin] in config db
m30000| Thu Jun 14 01:44:27 [conn4] build index config.databases { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:44:27 [conn] put [admin] on: config:localhost:30000
m30999| Thu Jun 14 01:44:27 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Thu Jun 14 01:44:27 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:27 [conn] connected connection!
m30001| Thu Jun 14 01:44:27 [initandlisten] connection accepted from 127.0.0.1:48969 #2 (2 connections now open)
m30999| Thu Jun 14 01:44:27 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Thu Jun 14 01:44:27 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:27 [conn] connected connection!
m30999| Thu Jun 14 01:44:27 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd97a3b0d2fef4d6a507be0
m30999| Thu Jun 14 01:44:27 [conn] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:44:27 [conn] creating new connection to:localhost:30001
m30000| Thu Jun 14 01:44:27 [initandlisten] connection accepted from 127.0.0.1:60395 #7 (7 connections now open)
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:27 [initandlisten] connection accepted from 127.0.0.1:48971 #3 (3 connections now open)
m30999| Thu Jun 14 01:44:27 [conn] connected connection!
m30999| Thu Jun 14 01:44:27 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd97a3b0d2fef4d6a507be0
m30999| Thu Jun 14 01:44:27 [conn] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:44:27 [conn] couldn't find database [test] in config db
m30999| Thu Jun 14 01:44:27 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0
m30999| Thu Jun 14 01:44:27 [conn] put [test] on: shard0001:localhost:30001
m30999| Thu Jun 14 01:44:27 [conn] enabling sharding on: test
m30999| Thu Jun 14 01:44:27 [conn] CMD: shardcollection: { shardcollection: "test.foo", key: { a: 1.0 } }
m30999| Thu Jun 14 01:44:27 [conn] enable sharding on: test.foo with shard key: { a: 1.0 }
m30999| Thu Jun 14 01:44:27 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:27 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd97a3b0d2fef4d6a507be2 based on: (empty)
m30999| Thu Jun 14 01:44:27 [conn] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:44:27 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:44:27 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:44:27 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30000| Thu Jun 14 01:44:27 [conn4] build index config.collections { _id: 1 }
m30000| Thu Jun 14 01:44:27 [conn4] build index done. scanned 0 total records. 0 secs
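The mongos lines above correspond to enabling sharding on the test database and sharding test.foo on { a: 1 }, which creates the single initial chunk with epoch 4fd97a3b0d2fef4d6a507be2. A sketch of the same setup from the shell, using the command spellings that appear in the log (the test's own code is not shown here):

// Sketch of the setup implied by "enable sharding on: test.foo with shard key: { a: 1.0 }".
var admin = db.getSiblingDB("admin");
admin.runCommand({ enablesharding: "test" });
admin.runCommand({ shardcollection: "test.foo", key: { a: 1 } });

// The resulting chunk metadata is visible on the config server:
db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(printjson);
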
m30001| Thu Jun 14 01:44:27 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.ns, filling with zeroes...
m30001| Thu Jun 14 01:44:27 [FileAllocator] creating directory /data/db/mrShardedOutput1/_tmp
m30999| Thu Jun 14 01:44:27 BackgroundJob starting: WriteBackListener-localhost:30001
m30000| Thu Jun 14 01:44:27 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/config.1, size: 32MB, took 0.674 secs
m30001| Thu Jun 14 01:44:28 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.ns, size: 16MB, took 0.402 secs
m30001| Thu Jun 14 01:44:28 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.0, filling with zeroes...
m30001| Thu Jun 14 01:44:28 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.0, size: 16MB, took 0.294 secs
m30001| Thu Jun 14 01:44:28 [conn2] build index test.foo { _id: 1 }
m30001| Thu Jun 14 01:44:28 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:28 [conn2] info: creating collection test.foo on add index
m30001| Thu Jun 14 01:44:28 [conn2] build index test.foo { a: 1.0 }
m30001| Thu Jun 14 01:44:28 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:28 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) R:7 W:71 r:254 w:1311875 1311ms
m30001| Thu Jun 14 01:44:28 [conn3] command admin.$cmd command: { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 locks(micros) W:71 reslen:173 1310ms
m30001| Thu Jun 14 01:44:28 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.1, filling with zeroes...
m30001| Thu Jun 14 01:44:28 [conn3] no current chunk manager found for this shard, will initialize
m30000| Thu Jun 14 01:44:28 [initandlisten] connection accepted from 127.0.0.1:60397 #8 (8 connections now open)
m30999| Thu Jun 14 01:44:28 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:44:28 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:28 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
ShardingTest test.foo-a_MinKey 1000|0 { "a" : { $minKey : 1 } } -> { "a" : { $maxKey : 1 } } shard0001 test.foo
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } dataWritten: 59333 splitThreshold: 921
m30999| Thu Jun 14 01:44:28 [conn] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:44:28 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:28 [initandlisten] connection accepted from 127.0.0.1:48973 #4 (4 connections now open)
m30999| Thu Jun 14 01:44:28 [conn] connected connection!
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } dataWritten: 1065 splitThreshold: 921
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split { a: 964.526341129859 }
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } dataWritten: 1065 splitThreshold: 921
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] warning: chunk is larger than 1024 bytes because of key { a: 327.5292321238884 }
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] warning: chunk is larger than 1024 bytes because of key { a: 327.5292321238884 }
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] warning: chunk is larger than 1024 bytes because of key { a: 327.5292321238884 }
m30000| Thu Jun 14 01:44:28 [initandlisten] connection accepted from 127.0.0.1:60399 #9 (9 connections now open)
m30001| Thu Jun 14 01:44:28 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: 327.5292321238884 } ], shardId: "test.foo-a_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:28 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:28 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1339652668:318525290 (sleeping for 30000ms)
m30001| Thu Jun 14 01:44:28 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3c32a28802daeedfb8
m30001| Thu Jun 14 01:44:28 [conn4] splitChunk accepted at version 1|0||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:28-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652668698), what: "split", ns: "test.foo", details: { before: { min: { a: MinKey }, max: { a: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 327.5292321238884 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:28 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:28 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd97a3b0d2fef4d6a507be2 based on: 1|0||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:28 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } on: { a: 327.5292321238884 } (splitThreshold 921)
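The autosplit above is the mongos noticing that writes to the { MinKey } -> { MaxKey } chunk passed its split threshold, asking the shard for split points, and sending an internal splitChunk request under the 'test.foo' distributed lock; the collection version moves from 1|0 to 1|2. The same split can be requested by hand with the user-facing split command:

// Sketch: manually split test.foo at the key the autosplitter chose above.
// "split" is the user-facing admin command; splitChunk in the log is the internal form sent to the shard.
var admin = db.getSiblingDB("admin");
admin.runCommand({ split: "test.foo", middle: { a: 327.5292321238884 } });
// or let the server pick the median of the chunk containing a given key:
admin.runCommand({ split: "test.foo", find: { a: 500 } });
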
m30999| Thu Jun 14 01:44:28 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:28 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } dataWritten: 143367 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 144571 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } dataWritten: 94785 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } dataWritten: 94785 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 94785 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } dataWritten: 94785 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } dataWritten: 94785 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split { a: 793.510861297377 }
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 94785 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 327.5292321238884 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:28 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 327.5292321238884 }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: 998.3975234740553 } ], shardId: "test.foo-a_327.5292321238884", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:28 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } dataWritten: 94785 splitThreshold: 471859
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split { a: 694.107598371187 }
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } dataWritten: 94785 splitThreshold: 471859
m30001| Thu Jun 14 01:44:28 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3c32a28802daeedfb9
m30001| Thu Jun 14 01:44:28 [conn4] splitChunk accepted at version 1|2||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:28-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652668917), what: "split", ns: "test.foo", details: { before: { min: { a: 327.5292321238884 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 327.5292321238884 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 998.3975234740553 }, max: { a: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:28 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:28 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||4fd97a3b0d2fef4d6a507be2 based on: 1|2||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:28 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: MaxKey } on: { a: 998.3975234740553 } (splitThreshold 471859) (migrate suggested)
m30999| Thu Jun 14 01:44:28 [conn] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Thu Jun 14 01:44:28 [conn] recently split chunk: { min: { a: 998.3975234740553 }, max: { a: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Thu Jun 14 01:44:28 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:28 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 998.3975234740553 } dataWritten: 210227 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 189429 splitThreshold: 943718
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 998.3975234740553 } dataWritten: 209805 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 998.3975234740553 }
m30999| Thu Jun 14 01:44:28 [conn] chunk not full enough to trigger auto-split { a: 719.2322912380099 }
m30999| Thu Jun 14 01:44:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 998.3975234740553 } dataWritten: 209805 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||4fd97a3b0d2fef4d6a507be2 based on: 1|4||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 998.3975234740553 } on: { a: 640.7093733209429 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 640.7093733209429 } dataWritten: 210570 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 612.9794265859357 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 189665 splitThreshold: 943718
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 277.9605298892269 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 998.3975234740553 } dataWritten: 209931 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 886.8744443478174 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 640.7093733209429 } dataWritten: 209805 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 535.7954983246034 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 189570 splitThreshold: 943718
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 208.737391043651 }
m30001| Thu Jun 14 01:44:28 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:28 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 327.5292321238884 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:28 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 327.5292321238884 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 640.7093733209429 } ], shardId: "test.foo-a_327.5292321238884", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:28 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:28 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3c32a28802daeedfba
m30001| Thu Jun 14 01:44:28 [conn4] splitChunk accepted at version 1|4||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:28 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:28-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652668999), what: "split", ns: "test.foo", details: { before: { min: { a: 327.5292321238884 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 327.5292321238884 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 640.7093733209429 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
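On the shard side every split in this log follows the same sequence: the split points lookup reports that the maximum number of requested split points (2) was reached, mongos sends a splitChunk command, conn4 takes the distributed lock for test.foo on the config server, verifies the request against the current collection version ("splitChunk accepted at version ..."), writes a "split" event to the config changelog, and unlocks. Those changelog writes are ordinary documents, so the split history captured here can be read back afterwards; a minimal sketch, assuming the config server from this run is still reachable on localhost:30000:

    // List the recorded splits for test.foo in the order they happened,
    // using the same fields that appear in the metadata events above.
    var config = connect("localhost:30000/config");
    config.changelog.find({ what: "split", ns: "test.foo" }).sort({ time: 1 }).forEach(function (ev) {
        print(ev.time + "  split at " + tojson(ev.details.left.max) +
              "  -> versions " + tojson(ev.details.left.lastmod) + " / " + tojson(ev.details.right.lastmod));
    });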
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 998.3975234740553 } dataWritten: 209805 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 998.3975234740553 }
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 829.8498435646491 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 640.7093733209429 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 640.7093733209429 } dataWritten: 209805 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 493.8715901955061 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 640.7093733209429 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 640.7093733209429 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 797.6352444405507 } ], shardId: "test.foo-a_640.7093733209429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfbb
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|6||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669177), what: "split", ns: "test.foo", details: { before: { min: { a: 640.7093733209429 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 640.7093733209429 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 797.6352444405507 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 327.5292321238884 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 327.5292321238884 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 456.4586339452165 } ], shardId: "test.foo-a_327.5292321238884", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfbc
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|8||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669225), what: "split", ns: "test.foo", details: { before: { min: { a: 327.5292321238884 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 327.5292321238884 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 456.4586339452165 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 0.07367152018367129 } ], shardId: "test.foo-a_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfbd
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|10||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669261), what: "split", ns: "test.foo", details: { before: { min: { a: MinKey }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 0.07367152018367129 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 0.07367152018367129 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.07367152018367129 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.07367152018367129 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 123.1918419151289 } ], shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfbe
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|12||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669274), what: "split", ns: "test.foo", details: { before: { min: { a: 0.07367152018367129 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.07367152018367129 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 123.1918419151289 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 123.1918419151289 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 123.1918419151289 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 204.0577089538382 } ], shardId: "test.foo-a_123.1918419151289", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfbf
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|14||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669463), what: "split", ns: "test.foo", details: { before: { min: { a: 123.1918419151289 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 123.1918419151289 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 204.0577089538382 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 456.4586339452165 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 456.4586339452165 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 542.4296058071777 } ], shardId: "test.foo-a_456.4586339452165", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc0
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|16||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669469), what: "split", ns: "test.foo", details: { before: { min: { a: 456.4586339452165 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 456.4586339452165 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 542.4296058071777 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 797.6352444405507 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 797.6352444405507 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 882.331873780809 } ], shardId: "test.foo-a_797.6352444405507", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc1
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|18||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669479), what: "split", ns: "test.foo", details: { before: { min: { a: 797.6352444405507 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 797.6352444405507 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 882.331873780809 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 640.7093733209429 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 640.7093733209429 }, max: { a: 797.6352444405507 }, from: "shard0001", splitKeys: [ { a: 714.0536251380356 } ], shardId: "test.foo-a_640.7093733209429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc2
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|20||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669581), what: "split", ns: "test.foo", details: { before: { min: { a: 640.7093733209429 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 640.7093733209429 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 714.0536251380356 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:29 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.1, size: 32MB, took 1.084 secs
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 204.0577089538382 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 204.0577089538382 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 264.0825842924789 } ], shardId: "test.foo-a_204.0577089538382", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc3
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|22||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669793), what: "split", ns: "test.foo", details: { before: { min: { a: 204.0577089538382 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 204.0577089538382 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 264.0825842924789 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.07367152018367129 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.07367152018367129 }, max: { a: 123.1918419151289 }, from: "shard0001", splitKeys: [ { a: 57.56464668319472 } ], shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc4
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|24||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669833), what: "split", ns: "test.foo", details: { before: { min: { a: 0.07367152018367129 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.07367152018367129 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 57.56464668319472 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 264.0825842924789 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 189570 splitThreshold: 943718
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 169.5012683078006 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 998.3975234740553 } dataWritten: 209805 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||4fd97a3b0d2fef4d6a507be2 based on: 1|6||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 998.3975234740553 } on: { a: 797.6352444405507 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 998.3975234740553 } dataWritten: 209979 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 951.0322846174492 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 640.7093733209429 } dataWritten: 210136 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||4fd97a3b0d2fef4d6a507be2 based on: 1|8||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 640.7093733209429 } on: { a: 456.4586339452165 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|10, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 998.3975234740553 } dataWritten: 210475 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 941.9286263109739 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } dataWritten: 189422 splitThreshold: 943718
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||4fd97a3b0d2fef4d6a507be2 based on: 1|10||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 327.5292321238884 } on: { a: 0.07367152018367129 } (splitThreshold 943718)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|12, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 327.5292321238884 } dataWritten: 209859 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||4fd97a3b0d2fef4d6a507be2 based on: 1|12||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 327.5292321238884 } on: { a: 123.1918419151289 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|14, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 327.5292321238884 } dataWritten: 210165 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 224.8215236005806 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 640.7093733209429 } dataWritten: 210554 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 556.1662326352625 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 998.3975234740553 } dataWritten: 209873 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 899.7790163010509 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 797.6352444405507 } dataWritten: 210697 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 737.458209758317 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|13||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 123.1918419151289 } dataWritten: 209840 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 96.08437095683331 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|9||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 456.4586339452165 } dataWritten: 210044 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 416.9952911207869 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 327.5292321238884 } dataWritten: 209805 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||4fd97a3b0d2fef4d6a507be2 based on: 1|14||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 327.5292321238884 } on: { a: 204.0577089538382 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|16, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 640.7093733209429 } dataWritten: 209861 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||4fd97a3b0d2fef4d6a507be2 based on: 1|16||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 640.7093733209429 } on: { a: 542.4296058071777 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|18, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 998.3975234740553 } dataWritten: 209754 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||4fd97a3b0d2fef4d6a507be2 based on: 1|18||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 998.3975234740553 } on: { a: 882.331873780809 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|20, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|19||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 882.331873780809 } dataWritten: 210638 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 881.4897688806171 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 640.7093733209429 } dataWritten: 210711 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 612.3436030165759 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 327.5292321238884 } dataWritten: 209892 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 279.5203552667873 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 797.6352444405507 } dataWritten: 209765 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||4fd97a3b0d2fef4d6a507be2 based on: 1|20||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 797.6352444405507 } on: { a: 714.0536251380356 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|22, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 998.3975234740553 } dataWritten: 209806 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 962.5433779687389 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|19||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 882.331873780809 } dataWritten: 209973 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 871.1841452947203 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 327.5292321238884 } dataWritten: 210014 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 274.4427487007536 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 640.7093733209429 } dataWritten: 209850 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 606.3963089215079 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|13||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 123.1918419151289 } dataWritten: 210123 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 70.64327720592867 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|9||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 456.4586339452165 } dataWritten: 210604 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 392.639416276946 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|17||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 542.4296058071777 } dataWritten: 209907 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 517.7724961604476 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 797.6352444405507 } dataWritten: 210474 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 774.8842364749855 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|15||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 204.0577089538382 } dataWritten: 209816 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 175.4765246116503 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 998.3975234740553 } dataWritten: 209805 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 945.4900880800409 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 327.5292321238884 } dataWritten: 209805 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||4fd97a3b0d2fef4d6a507be2 based on: 1|22||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 327.5292321238884 } on: { a: 264.0825842924789 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|24, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|13||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 123.1918419151289 } dataWritten: 210465 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||4fd97a3b0d2fef4d6a507be2 based on: 1|24||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|13||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 123.1918419151289 } on: { a: 57.56464668319472 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|26, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|15||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 204.0577089538382 } dataWritten: 210705 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 173.8174829091478 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|23||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 264.0825842924789 } dataWritten: 210254 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 260.986092386034 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|19||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 882.331873780809 } dataWritten: 210507 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 852.2047112105078 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|9||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 456.4586339452165 } dataWritten: 209939 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||4fd97a3b0d2fef4d6a507be2 based on: 1|26||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|9||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 456.4586339452165 } on: { a: 378.3565272980204 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|28, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 998.3975234740553 } dataWritten: 210077 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||4fd97a3b0d2fef4d6a507be2 based on: 1|28||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 998.3975234740553 } on: { a: 938.1160661714987 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|30, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 797.6352444405507 } dataWritten: 209911 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 763.7725451843478 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|29||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 938.1160661714987 } dataWritten: 210218 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 937.2243591262148 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 998.3975234740553 } dataWritten: 210736 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] chunk not full enough to trigger auto-split { a: 989.6031645632307 }
m30999| Thu Jun 14 01:44:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 640.7093733209429 } dataWritten: 210701 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:29 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||4fd97a3b0d2fef4d6a507be2 based on: 1|30||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:29 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 640.7093733209429 } on: { a: 590.8997745355827 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|32, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:29 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
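Throughout this run only the minor component of the collection version advances (1|2, 1|4, ... 1|32 so far) while the epoch 4fd97a3b0d2fef4d6a507be2 stays fixed, since splits rearrange metadata without moving any data off shard0001; each ChunkManager reload and setShardVersion above just pushes the higher minor version back to the shard. The resulting chunk layout is stored in config.chunks; a minimal sketch, again assuming the config server on localhost:30000, that prints each chunk range with its shard and lastmod version:

    // Dump the chunk ranges for test.foo in key order, mirroring the
    // "{ min } -->> { max }" notation used in the split points lookups above.
    var config = connect("localhost:30000/config");
    config.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(function (c) {
        print(tojson(c.min) + " -->> " + tojson(c.max) + "  " + c.shard + "  lastmod: " + tojson(c.lastmod));
    });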
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 327.5292321238884 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 327.5292321238884 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 378.3565272980204 } ], shardId: "test.foo-a_327.5292321238884", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc5
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|26||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669896), what: "split", ns: "test.foo", details: { before: { min: { a: 327.5292321238884 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 327.5292321238884 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 378.3565272980204 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 882.331873780809 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 882.331873780809 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 938.1160661714987 } ], shardId: "test.foo-a_882.331873780809", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc6
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|28||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669924), what: "split", ns: "test.foo", details: { before: { min: { a: 882.331873780809 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 882.331873780809 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 938.1160661714987 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:29 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 542.4296058071777 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:29 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 542.4296058071777 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 590.8997745355827 } ], shardId: "test.foo-a_542.4296058071777", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:29 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3d32a28802daeedfc7
m30001| Thu Jun 14 01:44:29 [conn4] splitChunk accepted at version 1|30||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:29 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:29-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652669967), what: "split", ns: "test.foo", details: { before: { min: { a: 542.4296058071777 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 542.4296058071777 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 590.8997745355827 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:29 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:30 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.2, filling with zeroes...
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|19||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 882.331873780809 } dataWritten: 210385 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 846.0588781675457 }
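The mongos lines above show the autosplit probe: a write pushes the per-chunk dataWritten counter (about 210 KB here) against splitThreshold 1048576, mongos asks the shard for split points, and when the shard finds no usable median it logs "chunk not full enough to trigger auto-split". A rough Python sketch of that accumulate-and-probe loop follows; the probe-after-roughly-one-fifth factor is an assumption inferred from the logged dataWritten values, not taken from server sources:

    # Illustrative bookkeeping only; PROBE_FACTOR is inferred from the log
    # (dataWritten ~210 KB vs a 1 MB splitThreshold), not a documented constant.
    SPLIT_THRESHOLD = 1048576          # bytes, from the log
    PROBE_FACTOR = 5                   # assumed: probe after ~threshold/5 written

    written_since_probe = {}           # chunk id -> bytes written since last probe

    def note_write(chunk_id, nbytes, lookup_split_points):
        """Accumulate writes for a chunk; probe the shard once enough has landed."""
        written_since_probe[chunk_id] = written_since_probe.get(chunk_id, 0) + nbytes
        if written_since_probe[chunk_id] * PROBE_FACTOR < SPLIT_THRESHOLD:
            return None                                   # not enough written yet
        data_written = written_since_probe.pop(chunk_id)  # reset the counter
        points = lookup_split_points(chunk_id, SPLIT_THRESHOLD)
        if not points:
            return None     # "chunk not full enough to trigger auto-split"
        return points[0]    # candidate split key for a splitChunk request

    # demo: a fake shard that reports the chunk is still too small
    print(note_write("test.foo-a_797.63...", 210385, lambda cid, t: []))  # None
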
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 378.3565272980204 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 123.1918419151289 } dataWritten: 210040 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 100.5491954008688 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 456.4586339452165 } dataWritten: 209897 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 420.9160429476134 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|27||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 378.3565272980204 } dataWritten: 210364 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 369.2971171228998 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|15||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 204.0577089538382 } dataWritten: 210704 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 163.2491809287717 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|23||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 264.0825842924789 } dataWritten: 210154 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 249.0282536764584 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 797.6352444405507 } dataWritten: 210002 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 757.0124976521969 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 998.3975234740553 } dataWritten: 209816 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 981.480715288323 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|17||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 542.4296058071777 } dataWritten: 210491 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:30 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 456.4586339452165 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:30 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 456.4586339452165 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 498.2021416153332 } ], shardId: "test.foo-a_456.4586339452165", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:30 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3e32a28802daeedfc8
m30001| Thu Jun 14 01:44:30 [conn4] splitChunk accepted at version 1|32||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:30 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:30-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652670595), what: "split", ns: "test.foo", details: { before: { min: { a: 456.4586339452165 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 456.4586339452165 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 498.2021416153332 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:30 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||4fd97a3b0d2fef4d6a507be2 based on: 1|32||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:30 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|17||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 542.4296058071777 } on: { a: 498.2021416153332 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|34, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
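After the split commits, mongos reloads the chunk map ("version 1|44... based on 1|42"-style lines, sequenceNumber incrementing) and sends setShardVersion so the shard tracks the new collection version. A small sketch of the staleness check those lines imply, assuming only that versions sharing an epoch compare by (major, minor); the exact server-side rules are not reproduced here:

    # Minimal comparison mirroring the reload/setShardVersion lines above.
    def is_stale(shard_version, config_version):
        """True if the shard's cached (major, minor, epoch) lags the config's."""
        s_major, s_minor, s_epoch = shard_version
        c_major, c_minor, c_epoch = config_version
        if s_epoch != c_epoch:
            return True                       # epoch change: force a full refresh
        return (s_major, s_minor) < (c_major, c_minor)

    epoch = "4fd97a3b0d2fef4d6a507be2"
    print(is_stale((1, 32, epoch), (1, 34, epoch)))   # True  -> send setShardVersion
    print(is_stale((1, 34, epoch), (1, 34, epoch)))   # False -> shard is current
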
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|23||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 264.0825842924789 } dataWritten: 209828 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 264.0825842924789 }
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 247.539982125863 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|19||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 882.331873780809 } dataWritten: 210141 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:30 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 797.6352444405507 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:30 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 797.6352444405507 }, max: { a: 882.331873780809 }, from: "shard0001", splitKeys: [ { a: 840.7121644073931 } ], shardId: "test.foo-a_797.6352444405507", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:30 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3e32a28802daeedfc9
m30001| Thu Jun 14 01:44:30 [conn4] splitChunk accepted at version 1|34||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:30 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:30-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652670623), what: "split", ns: "test.foo", details: { before: { min: { a: 797.6352444405507 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 797.6352444405507 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 840.7121644073931 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:30 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||4fd97a3b0d2fef4d6a507be2 based on: 1|34||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:30 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|19||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 882.331873780809 } on: { a: 840.7121644073931 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|36, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 456.4586339452165 } dataWritten: 210693 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 456.4586339452165 }
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 416.7465549065427 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|25||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 57.56464668319472 } dataWritten: 210383 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 41.74210535087353 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 797.6352444405507 } dataWritten: 210183 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||4fd97a3b0d2fef4d6a507be2 based on: 1|36||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:30 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 797.6352444405507 } on: { a: 752.6019558395919 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|38, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 123.1918419151289 } dataWritten: 210584 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 95.99229493543749 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|15||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 204.0577089538382 } dataWritten: 210779 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 1|40||4fd97a3b0d2fef4d6a507be2 based on: 1|38||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:30 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|15||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 204.0577089538382 } on: { a: 159.2125242384949 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|40, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|39||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 159.2125242384949 } dataWritten: 210162 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 157.8949497133646 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 204.0577089538382 } dataWritten: 210239 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 197.4462093789416 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 123.1918419151289 } dataWritten: 210280 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 93.96764702842519 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:30 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 714.0536251380356 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:30 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 714.0536251380356 }, max: { a: 797.6352444405507 }, from: "shard0001", splitKeys: [ { a: 752.6019558395919 } ], shardId: "test.foo-a_714.0536251380356", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:30 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3e32a28802daeedfca
m30001| Thu Jun 14 01:44:30 [conn4] splitChunk accepted at version 1|36||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:30 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:30-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652670674), what: "split", ns: "test.foo", details: { before: { min: { a: 714.0536251380356 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 714.0536251380356 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 752.6019558395919 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:30 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 123.1918419151289 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:30 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 123.1918419151289 }, max: { a: 204.0577089538382 }, from: "shard0001", splitKeys: [ { a: 159.2125242384949 } ], shardId: "test.foo-a_123.1918419151289", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:30 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3e32a28802daeedfcb
m30001| Thu Jun 14 01:44:30 [conn4] splitChunk accepted at version 1|38||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:30 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:30-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652670687), what: "split", ns: "test.foo", details: { before: { min: { a: 123.1918419151289 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 123.1918419151289 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 159.2125242384949 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 123.1918419151289 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|21||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 714.0536251380356 } dataWritten: 209931 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 1|42||4fd97a3b0d2fef4d6a507be2 based on: 1|40||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:30 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|21||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 714.0536251380356 } on: { a: 678.3563510786536 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|42, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 542.4296058071777 } dataWritten: 209826 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 532.63996597738 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 327.5292321238884 } dataWritten: 209978 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 300.5213739365524 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 882.331873780809 } dataWritten: 210431 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 872.6105215153633 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|25||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 57.56464668319472 } dataWritten: 210771 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 36.55231299458339 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:30 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 640.7093733209429 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:30 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 640.7093733209429 }, max: { a: 714.0536251380356 }, from: "shard0001", splitKeys: [ { a: 678.3563510786536 } ], shardId: "test.foo-a_640.7093733209429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:30 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3e32a28802daeedfcc
m30001| Thu Jun 14 01:44:30 [conn4] splitChunk accepted at version 1|40||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:30 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:30-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652670804), what: "split", ns: "test.foo", details: { before: { min: { a: 640.7093733209429 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 640.7093733209429 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 678.3563510786536 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 57.56464668319472 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|31||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 590.8997745355827 } dataWritten: 209740 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 590.8997745355827 }
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 573.2492897556983 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 456.4586339452165 } dataWritten: 209763 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:30 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 378.3565272980204 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:30 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 378.3565272980204 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 411.0287894698923 } ], shardId: "test.foo-a_378.3565272980204", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:30 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3e32a28802daeedfcd
m30001| Thu Jun 14 01:44:30 [conn4] splitChunk accepted at version 1|42||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:30 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:30-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652670940), what: "split", ns: "test.foo", details: { before: { min: { a: 378.3565272980204 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 378.3565272980204 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 411.0287894698923 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:30 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:30 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 1|44||4fd97a3b0d2fef4d6a507be2 based on: 1|42||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:30 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 456.4586339452165 } on: { a: 411.0287894698923 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|44, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:30 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 542.4296058071777 } dataWritten: 210501 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 542.4296058071777 }
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 530.1142484455189 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|23||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 264.0825842924789 } dataWritten: 210060 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 264.0825842924789 }
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 238.5834870463727 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|43||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 411.0287894698923 } dataWritten: 210350 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 410.6013422106015 }
m30999| Thu Jun 14 01:44:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 797.6352444405507 } dataWritten: 210017 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:30 [conn] chunk not full enough to trigger auto-split { a: 785.8363708016027 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 209499 splitThreshold: 943718
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split no split entry
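Note the different threshold on the edge chunk above: the { a: 998.39... } -->> { a: MaxKey } probe uses splitThreshold 943718, while interior chunks use 1048576 (the MinKey chunk later in this log gets 943718 as well), and "no split entry" means the shard returned no candidate key at all for that nearly empty chunk. A small sketch of picking the effective threshold; the 0.9 factor is inferred purely from the logged numbers (943718 is about 90% of 1048576), not from server sources:

    # Stand-ins for the MinKey / MaxKey sentinels seen in the log.
    MIN_KEY, MAX_KEY = object(), object()

    def effective_split_threshold(chunk_min, chunk_max, chunk_size_bytes=1048576):
        """Edge chunks (touching MinKey/MaxKey) get a slightly lower threshold."""
        if chunk_min is MIN_KEY or chunk_max is MAX_KEY:
            return int(chunk_size_bytes * 0.9)       # 943718 in this log
        return chunk_size_bytes                      # 1048576 for interior chunks

    print(effective_split_threshold(998.3975234740553, MAX_KEY))            # 943718
    print(effective_split_threshold(411.0287894698923, 456.4586339452165))  # 1048576
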
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 456.4586339452165 } dataWritten: 210297 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 441.6582078271719 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|41||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 678.3563510786536 } dataWritten: 209781 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 672.4079353257237 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 882.331873780809 } dataWritten: 209983 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 870.3724937715639 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 640.7093733209429 } dataWritten: 210000 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 617.3671550508066 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:30 [conn4] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 640.7093733209429 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|27||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 378.3565272980204 } dataWritten: 210734 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 378.3565272980204 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 356.5292024718596 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|25||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 57.56464668319472 } dataWritten: 210580 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 57.56464668319472 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 32.81513795521851 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 998.3975234740553 } dataWritten: 210294 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 998.3975234740553 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 967.240533226689 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|29||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 938.1160661714987 } dataWritten: 210337 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 938.1160661714987 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 912.7006183678424 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|33||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 498.2021416153332 } dataWritten: 209727 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 486.6080629130133 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 327.5292321238884 } dataWritten: 210445 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 1|46||4fd97a3b0d2fef4d6a507be2 based on: 1|44||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 327.5292321238884 } on: { a: 294.0222214358918 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|46, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 882.331873780809 } dataWritten: 210129 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 867.5607033625913 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|43||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 411.0287894698923 } dataWritten: 210094 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 405.1380559702133 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 204.0577089538382 } dataWritten: 209802 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 186.918068973441 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|23||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 264.0825842924789 } dataWritten: 210196 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 264.0825842924789 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 264.0825842924789 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 294.0222214358918 } ], shardId: "test.foo-a_264.0825842924789", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfce
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|44||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671205), what: "split", ns: "test.foo", details: { before: { min: { a: 264.0825842924789 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 264.0825842924789 }, max: { a: 294.0222214358918 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 294.0222214358918 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 204.0577089538382 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 204.0577089538382 }, max: { a: 264.0825842924789 }, from: "shard0001", splitKeys: [ { a: 233.8565055904641 } ], shardId: "test.foo-a_204.0577089538382", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfcf
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|46||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671254), what: "split", ns: "test.foo", details: { before: { min: { a: 204.0577089538382 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 204.0577089538382 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 233.8565055904641 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 1|48||4fd97a3b0d2fef4d6a507be2 based on: 1|46||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|23||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 264.0825842924789 } on: { a: 233.8565055904641 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|48, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 998.3975234740553 } dataWritten: 210226 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 1|50||4fd97a3b0d2fef4d6a507be2 based on: 1|48||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 998.3975234740553 } on: { a: 964.9150523226922 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|50, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 456.4586339452165 } dataWritten: 209950 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 436.9519871287885 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 797.6352444405507 } dataWritten: 209844 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 779.593116343652 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 123.1918419151289 } dataWritten: 210378 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 1|52||4fd97a3b0d2fef4d6a507be2 based on: 1|50||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 123.1918419151289 } on: { a: 83.77384564239721 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|52, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 938.1160661714987 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 938.1160661714987 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 964.9150523226922 } ], shardId: "test.foo-a_938.1160661714987", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfd0
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|48||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671310), what: "split", ns: "test.foo", details: { before: { min: { a: 938.1160661714987 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 938.1160661714987 }, max: { a: 964.9150523226922 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 964.9150523226922 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 57.56464668319472 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 57.56464668319472 }, max: { a: 123.1918419151289 }, from: "shard0001", splitKeys: [ { a: 83.77384564239721 } ], shardId: "test.foo-a_57.56464668319472", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfd1
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|50||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671378), what: "split", ns: "test.foo", details: { before: { min: { a: 57.56464668319472 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 57.56464668319472 }, max: { a: 83.77384564239721 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 83.77384564239721 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|37||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 752.6019558395919 } dataWritten: 210158 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 752.6019558395919 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 739.0435066660161 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|35||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 840.7121644073931 } dataWritten: 210770 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 840.7121644073931 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 825.2725115030458 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|27||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 378.3565272980204 } dataWritten: 210098 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 327.5292321238884 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 327.5292321238884 }, max: { a: 378.3565272980204 }, from: "shard0001", splitKeys: [ { a: 353.2720479801309 } ], shardId: "test.foo-a_327.5292321238884", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfd2
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|52||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671408), what: "split", ns: "test.foo", details: { before: { min: { a: 327.5292321238884 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 327.5292321238884 }, max: { a: 353.2720479801309 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 353.2720479801309 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 1|54||4fd97a3b0d2fef4d6a507be2 based on: 1|52||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|27||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 378.3565272980204 } on: { a: 353.2720479801309 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|54, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 123.1918419151289 } dataWritten: 210434 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:31 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.2, size: 64MB, took 1.416 secs
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 108.6511897372467 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|11||000000000000000000000000 min: { a: MinKey } max: { a: 0.07367152018367129 } dataWritten: 209297 splitThreshold: 943718
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 0.07367152018367129 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|39||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 159.2125242384949 } dataWritten: 209715 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 145.2222453321905 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 327.5292321238884 } dataWritten: 210567 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 317.5926938428913 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|25||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 57.56464668319472 } dataWritten: 210143 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.07367152018367129 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.07367152018367129 }, max: { a: 57.56464668319472 }, from: "shard0001", splitKeys: [ { a: 25.60273139230473 } ], shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfd3
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|54||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671622), what: "split", ns: "test.foo", details: { before: { min: { a: 0.07367152018367129 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.07367152018367129 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 25.60273139230473 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 1|56||4fd97a3b0d2fef4d6a507be2 based on: 1|54||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|25||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 57.56464668319472 } on: { a: 25.60273139230473 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|56, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
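
On the shard side (the m30001 lines), every accepted split follows the same sequence: take the distributed lock for test.foo against the config server at localhost:30000, check that the collection version still matches what the request expects ("splitChunk accepted at version ..."), record the two resulting chunks and a "split" changelog event, then release the lock. A rough sketch of that sequence, with hypothetical helpers (dist_lock, current_version, commit_split, log_change) standing in for the config-server updates the log shows:

    # Illustrative sketch only; helper names are placeholders, not real APIs.
    def handle_split_chunk(ns, min_key, max_key, split_keys, expected_version,
                           dist_lock, current_version, commit_split, log_change):
        with dist_lock(ns):                       # "distributed lock ... acquired"
            if current_version(ns) != expected_version:
                raise RuntimeError("stale chunk version, refusing to split")
            left, right = commit_split(ns, min_key, max_key, split_keys)
            log_change("split", ns, {             # "about to log metadata event"
                "before": {"min": min_key, "max": max_key},
                "left": left,
                "right": right,
            })
        # leaving the with-block releases the lock: "distributed lock ... unlocked"
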
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|55||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 25.60273139230473 } dataWritten: 209967 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 25.60273139230473 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 25.47298063758607 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 123.1918419151289 } dataWritten: 210362 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 106.8780638618358 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 123.1918419151289 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 204.0577089538382 } dataWritten: 210034 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 204.0577089538382 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 181.8677157767826 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 327.5292321238884 } dataWritten: 210259 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 316.9842955667323 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|37||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 752.6019558395919 } dataWritten: 209920 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 736.3976537330494 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|41||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 678.3563510786536 } dataWritten: 210529 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 663.2882865721875 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 714.0536251380356 } dataWritten: 210413 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 700.2231836889139 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|31||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 590.8997745355827 } dataWritten: 210399 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 678.3563510786536 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 542.4296058071777 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 542.4296058071777 }, max: { a: 590.8997745355827 }, from: "shard0001", splitKeys: [ { a: 563.897889911273 } ], shardId: "test.foo-a_542.4296058071777", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfd4
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|56||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671819), what: "split", ns: "test.foo", details: { before: { min: { a: 542.4296058071777 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 542.4296058071777 }, max: { a: 563.897889911273 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 563.897889911273 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 1|58||4fd97a3b0d2fef4d6a507be2 based on: 1|56||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|31||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 590.8997745355827 } on: { a: 563.897889911273 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|58, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|43||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 411.0287894698923 } dataWritten: 209772 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 411.0287894698923 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 400.43980643234 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 640.7093733209429 } dataWritten: 210150 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 590.8997745355827 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 590.8997745355827 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 610.6068178358934 } ], shardId: "test.foo-a_590.8997745355827", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfd5
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|58||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671838), what: "split", ns: "test.foo", details: { before: { min: { a: 590.8997745355827 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 590.8997745355827 }, max: { a: 610.6068178358934 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 610.6068178358934 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 32 version: 1|60||4fd97a3b0d2fef4d6a507be2 based on: 1|58||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 640.7093733209429 } on: { a: 610.6068178358934 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|60, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
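
Each successful split bumps the collection version: the router reloads chunk metadata ("ChunkManager: time to load chunks ... sequenceNumber: 32 version: 1|60||... based on: 1|58||...") and then pushes the new version to the shard with setShardVersion. The version strings have the form major|minor||epoch; in this run only the minor component moves, while the epoch (an ObjectId) identifies the sharding incarnation of the collection. A small, self-contained helper for reading and ordering these strings, assuming that major-then-minor ordering within one epoch:

    # Hypothetical helper for the version strings in these lines,
    # e.g. "1|60||4fd97a3b0d2fef4d6a507be2" (major|minor||epoch).
    def parse_chunk_version(s):
        majmin, epoch = s.split("||")
        major, minor = majmin.split("|")
        return int(major), int(minor), epoch

    def is_newer(a, b):
        """True if version string a is newer than b within the same epoch."""
        amaj, amin, aep = parse_chunk_version(a)
        bmaj, bmin, bep = parse_chunk_version(b)
        if aep != bep:
            raise ValueError("different epochs are not comparable this way")
        return (amaj, amin) > (bmaj, bmin)

    # is_newer("1|60||4fd97a3b0d2fef4d6a507be2",
    #          "1|58||4fd97a3b0d2fef4d6a507be2")  -> True
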
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|60||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 640.7093733209429 } dataWritten: 210140 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 640.7093733209429 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 629.7663570305061 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 590.8997745355827 } dataWritten: 210639 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 563.897889911273 } -->> { : 590.8997745355827 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 585.3266904972702 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|51||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 83.77384564239721 } dataWritten: 209856 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 83.77384564239721 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 79.84937053253449 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|29||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 938.1160661714987 } dataWritten: 210012 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:31 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 882.331873780809 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:31 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 882.331873780809 }, max: { a: 938.1160661714987 }, from: "shard0001", splitKeys: [ { a: 905.2934559328332 } ], shardId: "test.foo-a_882.331873780809", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:31 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a3f32a28802daeedfd6
m30001| Thu Jun 14 01:44:31 [conn4] splitChunk accepted at version 1|60||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:31 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:31-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652671923), what: "split", ns: "test.foo", details: { before: { min: { a: 882.331873780809 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 882.331873780809 }, max: { a: 905.2934559328332 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 905.2934559328332 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:31 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:31 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 33 version: 1|62||4fd97a3b0d2fef4d6a507be2 based on: 1|60||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|29||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 938.1160661714987 } on: { a: 905.2934559328332 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|62, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:31 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|62||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 938.1160661714987 } dataWritten: 209940 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 938.1160661714987 }
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 926.9978497281315 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|35||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 840.7121644073931 } dataWritten: 210564 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 820.3660857519612 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 840.7121644073931 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 123.1918419151289 } dataWritten: 210193 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 104.8766296600855 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:31 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 294.0222214358918 }
m30999| Thu Jun 14 01:44:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 294.0222214358918 } dataWritten: 210116 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:31 [conn] chunk not full enough to trigger auto-split { a: 286.1069031537295 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|49||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 964.9150523226922 } dataWritten: 210423 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 964.9150523226922 }
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 958.3310026100884 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 714.0536251380356 } dataWritten: 210278 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 678.3563510786536 } -->> { : 714.0536251380356 }
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 699.0977205709515 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|60||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 640.7093733209429 } dataWritten: 209931 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 640.7093733209429 }
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 628.7458484365142 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|41||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 678.3563510786536 } dataWritten: 209878 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 660.849708006739 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|47||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 233.8565055904641 } dataWritten: 210685 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 224.8600311721443 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|56||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 57.56464668319472 } dataWritten: 210078 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 45.0269490416777 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 797.6352444405507 } dataWritten: 210257 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 34 version: 1|64||4fd97a3b0d2fef4d6a507be2 based on: 1|62||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 797.6352444405507 } on: { a: 773.3799848158397 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|64, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|57||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 563.897889911273 } dataWritten: 210202 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 562.0426719729961 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 25.60273139230473 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 752.6019558395919 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 752.6019558395919 }, max: { a: 797.6352444405507 }, from: "shard0001", splitKeys: [ { a: 773.3799848158397 } ], shardId: "test.foo-a_752.6019558395919", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfd7
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|62||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672179), what: "split", ns: "test.foo", details: { before: { min: { a: 752.6019558395919 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 752.6019558395919 }, max: { a: 773.3799848158397 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 773.3799848158397 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 563.897889911273 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 294.0222214358918 } dataWritten: 210123 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 294.0222214358918 }
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 284.5463156570389 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 123.1918419151289 } dataWritten: 210003 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 123.1918419151289 }
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 103.8841293547749 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|53||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 353.2720479801309 } dataWritten: 209740 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 347.1042513195609 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 353.2720479801309 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 193834 splitThreshold: 943718
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split no split entry
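
The two edge chunks ({ a: MinKey } -> ... and ... -> { a: MaxKey }) are checked against a lower splitThreshold of 943718 rather than 1048576; since 1048576 * 0.9 = 943718.4, the router appears to shave about ten percent off the threshold for chunks that border MinKey/MaxKey, presumably so that hot edge chunks split a little sooner. The "no split entry" result means the shard found no candidate split key at all in the range, so nothing happens. A sketch of that apparent threshold rule, under those assumptions:

    # Inferred from the logged numbers (1048576 for interior chunks, 943718 for
    # the MinKey/MaxKey chunks); MIN_KEY and MAX_KEY are placeholder sentinels.
    MIN_KEY, MAX_KEY = object(), object()
    BASE_SPLIT_THRESHOLD = 1048576  # 1 MB, as logged

    def split_threshold(min_key, max_key, base=BASE_SPLIT_THRESHOLD):
        if min_key is MIN_KEY or max_key is MAX_KEY:
            return int(base * 0.9)   # 943718 in this log
        return base
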
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|39||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 159.2125242384949 } dataWritten: 210279 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 141.4770818252742 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|41||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 678.3563510786536 } dataWritten: 209886 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 659.0694765150853 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 327.5292321238884 } dataWritten: 210063 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 312.3747285252534 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 542.4296058071777 } dataWritten: 210028 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 498.2021416153332 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 498.2021416153332 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 515.6449770586091 } ], shardId: "test.foo-a_498.2021416153332", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfd8
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|64||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672457), what: "split", ns: "test.foo", details: { before: { min: { a: 498.2021416153332 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 498.2021416153332 }, max: { a: 515.6449770586091 }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 515.6449770586091 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 1|66||4fd97a3b0d2fef4d6a507be2 based on: 1|64||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 542.4296058071777 } on: { a: 515.6449770586091 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|66, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|51||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 83.77384564239721 } dataWritten: 209720 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 83.77384564239721 }
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 75.97781105868862 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 123.1918419151289 } dataWritten: 210444 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 83.77384564239721 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 83.77384564239721 }, max: { a: 123.1918419151289 }, from: "shard0001", splitKeys: [ { a: 101.960589257945 } ], shardId: "test.foo-a_83.77384564239721", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfd9
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|66||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672505), what: "split", ns: "test.foo", details: { before: { min: { a: 83.77384564239721 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 83.77384564239721 }, max: { a: 101.960589257945 }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 101.960589257945 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 36 version: 1|68||4fd97a3b0d2fef4d6a507be2 based on: 1|66||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 123.1918419151289 } on: { a: 101.960589257945 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|68, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|51||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 83.77384564239721 } dataWritten: 209875 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 75.8896351626982 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|54||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 378.3565272980204 } dataWritten: 210646 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 369.8464951515394 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 998.3975234740553 } dataWritten: 210248 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 983.4185361611541 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 590.8997745355827 } dataWritten: 209993 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 581.1999942850578 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|62||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 938.1160661714987 } dataWritten: 209721 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 922.7447877922872 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 264.0825842924789 } dataWritten: 210734 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 250.2593971029685 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|49||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 964.9150523226922 } dataWritten: 209808 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 954.219592746076 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 456.4586339452165 } dataWritten: 210650 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 1|70||4fd97a3b0d2fef4d6a507be2 based on: 1|68||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 456.4586339452165 } on: { a: 427.2300955074828 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|70, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 192933 splitThreshold: 943718
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 264.0825842924789 } dataWritten: 210491 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 249.7990167662232 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 204.0577089538382 } dataWritten: 210609 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 38 version: 1|72||4fd97a3b0d2fef4d6a507be2 based on: 1|70||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 204.0577089538382 } on: { a: 176.0230312595962 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|72, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|71||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 176.0230312595962 } dataWritten: 209858 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 175.9143368098498 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 714.0536251380356 } dataWritten: 210754 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 1|74||4fd97a3b0d2fef4d6a507be2 based on: 1|72||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 714.0536251380356 } on: { a: 694.6501944983177 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|74, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 590.8997745355827 } dataWritten: 210446 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 580.3423007737663 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|35||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 840.7121644073931 } dataWritten: 210591 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 1|76||4fd97a3b0d2fef4d6a507be2 based on: 1|74||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|35||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 840.7121644073931 } on: { a: 815.7684070742035 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|76, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|47||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 233.8565055904641 } dataWritten: 209943 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 220.7232550467572 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|41||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 678.3563510786536 } dataWritten: 210774 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 41 version: 1|78||4fd97a3b0d2fef4d6a507be2 based on: 1|76||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|41||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 678.3563510786536 } on: { a: 657.3538695372831 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|78, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|76||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 840.7121644073931 } dataWritten: 209973 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 832.5424624464664 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 203167 splitThreshold: 943718
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|62||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 938.1160661714987 } dataWritten: 209925 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 921.7604672839924 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|33||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 498.2021416153332 } dataWritten: 210356 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 42 version: 1|80||4fd97a3b0d2fef4d6a507be2 based on: 1|78||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|33||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 498.2021416153332 } on: { a: 473.1445991105042 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|80, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|78||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 678.3563510786536 } dataWritten: 210580 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 673.6258206871569 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 882.331873780809 } dataWritten: 209931 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 43 version: 1|82||4fd97a3b0d2fef4d6a507be2 based on: 1|80||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:32 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 882.331873780809 } on: { a: 855.8703567421647 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|82, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:32 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|71||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 176.0230312595962 } dataWritten: 210366 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 174.8686677429014 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|57||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 563.897889911273 } dataWritten: 210244 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 557.8313673490627 }
m30999| Thu Jun 14 01:44:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|64||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 797.6352444405507 } dataWritten: 210206 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:32 [conn] chunk not full enough to trigger auto-split { a: 789.626144743071 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 327.5292321238884 } dataWritten: 210500 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 44 version: 1|84||4fd97a3b0d2fef4d6a507be2 based on: 1|82||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 327.5292321238884 } on: { a: 309.3101713472285 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|84, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 264.0825842924789 } dataWritten: 210557 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 45 version: 1|86||4fd97a3b0d2fef4d6a507be2 based on: 1|84||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 264.0825842924789 } on: { a: 248.3080159156712 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|86, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|37||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 752.6019558395919 } dataWritten: 209754 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 46 version: 1|88||4fd97a3b0d2fef4d6a507be2 based on: 1|86||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|37||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 752.6019558395919 } on: { a: 729.8361633348899 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|88, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|66||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 542.4296058071777 } dataWritten: 210515 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 531.4424969394696 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|79||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 473.1445991105042 } dataWritten: 210023 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 472.1813004733815 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|75||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 815.7684070742035 } dataWritten: 209730 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 813.8380764874246 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|88||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 752.6019558395919 } dataWritten: 210494 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 743.5091937267264 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 209947 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 728.9024905795321 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 294.0222214358918 } dataWritten: 209848 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 279.057256445801 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|60||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 640.7093733209429 } dataWritten: 210070 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 47 version: 1|90||4fd97a3b0d2fef4d6a507be2 based on: 1|88||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|60||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 640.7093733209429 } on: { a: 623.3985075048967 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|90, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|43||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 411.0287894698923 } dataWritten: 209756 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 48 version: 1|92||4fd97a3b0d2fef4d6a507be2 based on: 1|90||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|43||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 411.0287894698923 } on: { a: 392.8718206829087 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|92, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|56||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 57.56464668319472 } dataWritten: 210565 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 49 version: 1|94||4fd97a3b0d2fef4d6a507be2 based on: 1|92||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|56||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 57.56464668319472 } on: { a: 39.89992532263464 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|94, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 294.0222214358918 } dataWritten: 209849 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 278.7164576146738 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|61||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 905.2934559328332 } dataWritten: 210004 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 897.4433620601043 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|55||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 25.60273139230473 } dataWritten: 210773 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 15.77777992986329 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|63||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 773.3799848158397 } dataWritten: 209743 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 767.4812024126988 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|51||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 83.77384564239721 } dataWritten: 210248 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 71.80130178748334 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|39||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 159.2125242384949 } dataWritten: 210189 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 50 version: 1|96||4fd97a3b0d2fef4d6a507be2 based on: 1|94||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|39||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 159.2125242384949 } on: { a: 136.5735165062921 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|96, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 563.897889911273 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 411.0287894698923 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 411.0287894698923 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 427.2300955074828 } ], shardId: "test.foo-a_411.0287894698923", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfda
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|68||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672652), what: "split", ns: "test.foo", details: { before: { min: { a: 411.0287894698923 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 411.0287894698923 }, max: { a: 427.2300955074828 }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 427.2300955074828 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 159.2125242384949 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 159.2125242384949 }, max: { a: 204.0577089538382 }, from: "shard0001", splitKeys: [ { a: 176.0230312595962 } ], shardId: "test.foo-a_159.2125242384949", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfdb
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|70||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672674), what: "split", ns: "test.foo", details: { before: { min: { a: 159.2125242384949 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 159.2125242384949 }, max: { a: 176.0230312595962 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 176.0230312595962 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 176.0230312595962 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 678.3563510786536 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 678.3563510786536 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 678.3563510786536 }, max: { a: 714.0536251380356 }, from: "shard0001", splitKeys: [ { a: 694.6501944983177 } ], shardId: "test.foo-a_678.3563510786536", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfdc
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|72||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672726), what: "split", ns: "test.foo", details: { before: { min: { a: 678.3563510786536 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 678.3563510786536 }, max: { a: 694.6501944983177 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 694.6501944983177 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 563.897889911273 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 797.6352444405507 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 797.6352444405507 }, max: { a: 840.7121644073931 }, from: "shard0001", splitKeys: [ { a: 815.7684070742035 } ], shardId: "test.foo-a_797.6352444405507", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfdd
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|74||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672758), what: "split", ns: "test.foo", details: { before: { min: { a: 797.6352444405507 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 797.6352444405507 }, max: { a: 815.7684070742035 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 815.7684070742035 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 640.7093733209429 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 640.7093733209429 }, max: { a: 678.3563510786536 }, from: "shard0001", splitKeys: [ { a: 657.3538695372831 } ], shardId: "test.foo-a_640.7093733209429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfde
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|76||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672800), what: "split", ns: "test.foo", details: { before: { min: { a: 640.7093733209429 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 640.7093733209429 }, max: { a: 657.3538695372831 }, lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 657.3538695372831 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 456.4586339452165 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 456.4586339452165 }, max: { a: 498.2021416153332 }, from: "shard0001", splitKeys: [ { a: 473.1445991105042 } ], shardId: "test.foo-a_456.4586339452165", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfdf
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|78||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672881), what: "split", ns: "test.foo", details: { before: { min: { a: 456.4586339452165 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 456.4586339452165 }, max: { a: 473.1445991105042 }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 473.1445991105042 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:32 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 840.7121644073931 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:32 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 840.7121644073931 }, max: { a: 882.331873780809 }, from: "shard0001", splitKeys: [ { a: 855.8703567421647 } ], shardId: "test.foo-a_840.7121644073931", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:32 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4032a28802daeedfe0
m30001| Thu Jun 14 01:44:32 [conn4] splitChunk accepted at version 1|80||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:32 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:32-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652672916), what: "split", ns: "test.foo", details: { before: { min: { a: 840.7121644073931 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 840.7121644073931 }, max: { a: 855.8703567421647 }, lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 855.8703567421647 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:32 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 176.0230312595962 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:44:32 [conn4] request split points lookup for chunk test.foo { : 773.3799848158397 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 294.0222214358918 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 294.0222214358918 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 309.3101713472285 } ], shardId: "test.foo-a_294.0222214358918", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe1
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|82||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673004), what: "split", ns: "test.foo", details: { before: { min: { a: 294.0222214358918 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 294.0222214358918 }, max: { a: 309.3101713472285 }, lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 309.3101713472285 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 233.8565055904641 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 233.8565055904641 }, max: { a: 264.0825842924789 }, from: "shard0001", splitKeys: [ { a: 248.3080159156712 } ], shardId: "test.foo-a_233.8565055904641", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe2
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|84||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-42", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673019), what: "split", ns: "test.foo", details: { before: { min: { a: 233.8565055904641 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 248.3080159156712 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 714.0536251380356 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 714.0536251380356 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 729.8361633348899 } ], shardId: "test.foo-a_714.0536251380356", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe3
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|86||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-43", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673042), what: "split", ns: "test.foo", details: { before: { min: { a: 714.0536251380356 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 714.0536251380356 }, max: { a: 729.8361633348899 }, lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 729.8361633348899 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 515.6449770586091 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 729.8361633348899 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 610.6068178358934 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 610.6068178358934 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 623.3985075048967 } ], shardId: "test.foo-a_610.6068178358934", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe4
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|88||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-44", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673212), what: "split", ns: "test.foo", details: { before: { min: { a: 610.6068178358934 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 610.6068178358934 }, max: { a: 623.3985075048967 }, lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 378.3565272980204 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 378.3565272980204 }, max: { a: 411.0287894698923 }, from: "shard0001", splitKeys: [ { a: 392.8718206829087 } ], shardId: "test.foo-a_378.3565272980204", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe5
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|90||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673257), what: "split", ns: "test.foo", details: { before: { min: { a: 378.3565272980204 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 378.3565272980204 }, max: { a: 392.8718206829087 }, lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 392.8718206829087 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 25.60273139230473 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 25.60273139230473 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 25.60273139230473 }, max: { a: 57.56464668319472 }, from: "shard0001", splitKeys: [ { a: 39.89992532263464 } ], shardId: "test.foo-a_25.60273139230473", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe6
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|92||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673288), what: "split", ns: "test.foo", details: { before: { min: { a: 25.60273139230473 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 25.60273139230473 }, max: { a: 39.89992532263464 }, lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 39.89992532263464 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 773.3799848158397 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 123.1918419151289 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 123.1918419151289 }, max: { a: 159.2125242384949 }, from: "shard0001", splitKeys: [ { a: 136.5735165062921 } ], shardId: "test.foo-a_123.1918419151289", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe7
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|94||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673387), what: "split", ns: "test.foo", details: { before: { min: { a: 123.1918419151289 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 123.1918419151289 }, max: { a: 136.5735165062921 }, lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 136.5735165062921 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:33 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.3, filling with zeroes...
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|70||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 456.4586339452165 } dataWritten: 210668 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 427.2300955074828 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:33 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 427.2300955074828 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:33 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 427.2300955074828 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 441.0435238853461 } ], shardId: "test.foo-a_427.2300955074828", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:33 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4132a28802daeedfe8
m30001| Thu Jun 14 01:44:33 [conn4] splitChunk accepted at version 1|96||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:33 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:33-48", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652673749), what: "split", ns: "test.foo", details: { before: { min: { a: 427.2300955074828 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 427.2300955074828 }, max: { a: 441.0435238853461 }, lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 441.0435238853461 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:33 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:33 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 51 version: 1|98||4fd97a3b0d2fef4d6a507be2 based on: 1|96||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:33 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|70||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 456.4586339452165 } on: { a: 441.0435238853461 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|98, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:33 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|55||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 25.60273139230473 } dataWritten: 210557 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 25.60273139230473 }
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 15.3344573822175 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|72||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 204.0577089538382 } dataWritten: 210596 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 204.0577089538382 }
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 190.3993838950527 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|79||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 473.1445991105042 } dataWritten: 210101 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 470.5610429948271 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|54||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 378.3565272980204 } dataWritten: 210724 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 366.4312700347027 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|57||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 563.897889911273 } dataWritten: 210662 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 555.67619117839 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 590.8997745355827 } dataWritten: 210736 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 576.5831890834659 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 563.897889911273 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:33 [conn4] request split points lookup for chunk test.foo { : 729.8361633348899 } -->> { : 752.6019558395919 }
m30999| Thu Jun 14 01:44:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|88||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 752.6019558395919 } dataWritten: 209973 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:33 [conn] chunk not full enough to trigger auto-split { a: 741.2350607358579 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|93||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 39.89992532263464 } dataWritten: 210368 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 38.61541634239796 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 25.60273139230473 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 610.6068178358934 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|59||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 610.6068178358934 } dataWritten: 210144 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 602.9509062145426 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|47||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 233.8565055904641 } dataWritten: 210488 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 204.0577089538382 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 204.0577089538382 }, max: { a: 233.8565055904641 }, from: "shard0001", splitKeys: [ { a: 216.8904302452864 } ], shardId: "test.foo-a_204.0577089538382", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedfe9
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|98||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-49", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674051), what: "split", ns: "test.foo", details: { before: { min: { a: 204.0577089538382 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 204.0577089538382 }, max: { a: 216.8904302452864 }, lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 216.8904302452864 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 32ms sequenceNumber: 52 version: 1|100||4fd97a3b0d2fef4d6a507be2 based on: 1|98||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|47||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 233.8565055904641 } on: { a: 216.8904302452864 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|100, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|69||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 427.2300955074828 } dataWritten: 209732 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 427.2300955074828 }
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 423.7091644710037 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|62||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 938.1160661714987 } dataWritten: 210062 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 905.2934559328332 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 905.2934559328332 }, max: { a: 938.1160661714987 }, from: "shard0001", splitKeys: [ { a: 918.4259760765641 } ], shardId: "test.foo-a_905.2934559328332", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedfea
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|100||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674116), what: "split", ns: "test.foo", details: { before: { min: { a: 905.2934559328332 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 905.2934559328332 }, max: { a: 918.4259760765641 }, lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 918.4259760765641 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|102, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 53 version: 1|102||4fd97a3b0d2fef4d6a507be2 based on: 1|100||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 855.8703567421647 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 264.0825842924789 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 264.0825842924789 }, max: { a: 294.0222214358918 }, from: "shard0001", splitKeys: [ { a: 277.1560315461681 } ], shardId: "test.foo-a_264.0825842924789", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|62||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 938.1160661714987 } on: { a: 918.4259760765641 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|102, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|82||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 882.331873780809 } dataWritten: 210069 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 869.1683033109524 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|51||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 83.77384564239721 } dataWritten: 210185 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 70.52864556854831 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 294.0222214358918 } dataWritten: 210209 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedfeb
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|102||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674211), what: "split", ns: "test.foo", details: { before: { min: { a: 264.0825842924789 }, max: { a: 294.0222214358918 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 264.0825842924789 }, max: { a: 277.1560315461681 }, lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 277.1560315461681 }, max: { a: 294.0222214358918 }, lastmod: Timestamp 1000|104, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 54 version: 1|104||4fd97a3b0d2fef4d6a507be2 based on: 1|102||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 729.8361633348899 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 176.0230312595962 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 176.0230312595962 }, max: { a: 204.0577089538382 }, from: "shard0001", splitKeys: [ { a: 188.6698238706465 } ], shardId: "test.foo-a_176.0230312595962", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 294.0222214358918 } on: { a: 277.1560315461681 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|104, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
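setShardVersion is the internal handshake mongos uses to tell shard0001 that the authoritative version of test.foo is now Timestamp 1000|104 under the epoch shown; the ok: 1.0 reply acknowledges it, with oldVersion echoing what the shard had recorded before. Applications never send this command themselves; the user-visible view of the same information is the sharding status helper, sketched here on the assumption that a shell is attached to the mongos from this run (localhost:30999):

    // Hedged sketch: human-readable shard, chunk and version listing via mongos.
    var mongos = new Mongo("localhost:30999");
    mongos.getDB("admin").printShardingStatus();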
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|65||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 515.6449770586091 } dataWritten: 210261 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 510.7676361428172 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|88||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 752.6019558395919 } dataWritten: 210442 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 740.8297165566304 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|72||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 204.0577089538382 } dataWritten: 209763 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedfec
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|104||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674284), what: "split", ns: "test.foo", details: { before: { min: { a: 176.0230312595962 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 176.0230312595962 }, max: { a: 188.6698238706465 }, lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 188.6698238706465 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 55 version: 1|106||4fd97a3b0d2fef4d6a507be2 based on: 1|104||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|72||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 204.0577089538382 } on: { a: 188.6698238706465 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|106, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 216.8904302452864 }
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|99||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 216.8904302452864 } dataWritten: 210174 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 216.2714704020526 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 855.8703567421647 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 855.8703567421647 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 855.8703567421647 }, max: { a: 882.331873780809 }, from: "shard0001", splitKeys: [ { a: 868.5788679342879 } ], shardId: "test.foo-a_855.8703567421647", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedfed
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|106||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-53", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674374), what: "split", ns: "test.foo", details: { before: { min: { a: 855.8703567421647 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 855.8703567421647 }, max: { a: 868.5788679342879 }, lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 868.5788679342879 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
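Every split the shard performs is driven by a splitChunk command like the one logged a few lines above: keyPattern names the shard key, min/max bound the chunk being split, splitKeys carries the point chosen by the split-points lookup, and configdb points at the config server that must record the result. splitChunk itself is internal (mongos to mongod); when a manual split is wanted, the user-facing route is the split admin command through mongos. A sketch under that assumption, reusing the split point from the log:

    // Hedged sketch: ask mongos to split the [855.87..., 882.33...] chunk at the
    // key the autosplit chose (roughly what sh.splitAt("test.foo", ...) does).
    // If the autosplit above already committed, this exact key is now a chunk
    // boundary and the command will refuse it; pick a key strictly inside a chunk.
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({ split: "test.foo",
                                 middle: { a: 868.5788679342879 } }));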
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|82||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 882.331873780809 } dataWritten: 210709 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 56 version: 1|108||4fd97a3b0d2fef4d6a507be2 based on: 1|106||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|82||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 882.331873780809 } on: { a: 868.5788679342879 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|108, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 309.3101713472285 } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|84||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 327.5292321238884 } dataWritten: 210751 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 323.2757812641534 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 277.1560315461681 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|103||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 277.1560315461681 } dataWritten: 209809 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 277.0515883610773 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 918.4259760765641 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|101||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 918.4259760765641 } dataWritten: 210737 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 918.1290587479152 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 25.60273139230473 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 136.5735165062921 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|69||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 427.2300955074828 } dataWritten: 210642 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 423.0537179064901 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|61||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 905.2934559328332 } dataWritten: 210204 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 894.5830083522684 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|78||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 678.3563510786536 } dataWritten: 210403 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 669.2431627031535 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|93||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 39.89992532263464 } dataWritten: 209992 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 37.56467027699972 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|95||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 136.5735165062921 } dataWritten: 210579 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 134.6047919867972 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 964.9150523226922 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 964.9150523226922 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 977.1164746659301 } ], shardId: "test.foo-a_964.9150523226922", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedfee
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|108||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-54", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674684), what: "split", ns: "test.foo", details: { before: { min: { a: 964.9150523226922 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 964.9150523226922 }, max: { a: 977.1164746659301 }, lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 977.1164746659301 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|110, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
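The distributed-lock lines bracket every split: the shard creates a lock handle for 'test.foo' against the config server with a 900000 ms (15 minute) takeover timeout and a 30000 ms ping interval, acquires it (the ts ObjectId identifies this particular acquisition), commits the metadata change, and unlocks. Lock state and the holders' liveness pings are plain documents on the config server; a hedged sketch of where to look, with collection names as this branch uses them to the best of my knowledge:

    // Hedged sketch: inspect distributed-lock state on the config server.
    var cfg = new Mongo("localhost:30000").getDB("config");
    printjson(cfg.locks.findOne({ _id: "test.foo" }));   // holder, state, ts, why
    cfg.lockpings.find().sort({ ping: -1 }).limit(3)     // recent liveness pings
                 .forEach(printjson);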
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 216.8904302452864 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 427.2300955074828 } -->> { : 441.0435238853461 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 815.7684070742035 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 815.7684070742035 }, max: { a: 840.7121644073931 }, from: "shard0001", splitKeys: [ { a: 827.5642418995561 } ], shardId: "test.foo-a_815.7684070742035", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedfef
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|110||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-55", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674784), what: "split", ns: "test.foo", details: { before: { min: { a: 815.7684070742035 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 815.7684070742035 }, max: { a: 827.5642418995561 }, lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 827.5642418995561 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 216.8904302452864 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 773.3799848158397 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 773.3799848158397 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 773.3799848158397 }, max: { a: 797.6352444405507 }, from: "shard0001", splitKeys: [ { a: 784.2714953599016 } ], shardId: "test.foo-a_773.3799848158397", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedff0
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|112||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-56", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674871), what: "split", ns: "test.foo", details: { before: { min: { a: 773.3799848158397 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 773.3799848158397 }, max: { a: 784.2714953599016 }, lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 784.2714953599016 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 248.3080159156712 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:44:34 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.07367152018367129 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:44:34 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.07367152018367129 }, max: { a: 25.60273139230473 }, from: "shard0001", splitKeys: [ { a: 12.55217658236718 } ], shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:34 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4232a28802daeedff1
m30001| Thu Jun 14 01:44:34 [conn4] splitChunk accepted at version 1|114||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:34 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:34-57", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652674888), what: "split", ns: "test.foo", details: { before: { min: { a: 0.07367152018367129 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.07367152018367129 }, max: { a: 12.55217658236718 }, lastmod: Timestamp 1000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:34 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
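Each "about to log metadata event" document is appended to config.changelog: a split records the chunk as it was (before) and the two chunks that replace it (left, right), including their new lastmod versions. Reading that collection back is a convenient way to audit the splits this test generates; a hedged sketch:

    // Hedged sketch: the most recent split events for test.foo, newest first.
    var cfg = new Mongo("localhost:30000").getDB("config");
    cfg.changelog.find({ what: "split", ns: "test.foo" })
                 .sort({ time: -1 })
                 .limit(5)
                 .forEach(function (e) {
                     print(e.time + "  " + tojson(e.details.before.min) +
                           " split at " + tojson(e.details.left.max));
                 });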
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 101.960589257945 } -->> { : 123.1918419151289 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|91||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 392.8718206829087 } dataWritten: 210198 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 389.7251766928107 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 640.7093733209429 } dataWritten: 210603 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 634.3433714479373 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 998.3975234740553 } dataWritten: 209863 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 57 version: 1|110||4fd97a3b0d2fef4d6a507be2 based on: 1|108||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 998.3975234740553 } on: { a: 977.1164746659301 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|110, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|100||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 233.8565055904641 } dataWritten: 209783 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 229.0101682285016 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|69||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 427.2300955074828 } dataWritten: 210417 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 422.3165769021969 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|97||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 441.0435238853461 } dataWritten: 210537 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 438.1726121704319 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|110||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 998.3975234740553 } dataWritten: 209743 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 988.4819556610581 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 640.7093733209429 } dataWritten: 209811 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 633.8548863298807 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|76||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 840.7121644073931 } dataWritten: 209947 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 58 version: 1|112||4fd97a3b0d2fef4d6a507be2 based on: 1|110||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|76||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 840.7121644073931 } on: { a: 827.5642418995561 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|112, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|98||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 456.4586339452165 } dataWritten: 209972 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 453.3780111841101 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|100||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 233.8565055904641 } dataWritten: 209970 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 228.6516345283594 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|64||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 797.6352444405507 } dataWritten: 210002 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 59 version: 1|114||4fd97a3b0d2fef4d6a507be2 based on: 1|112||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|64||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 797.6352444405507 } on: { a: 784.2714953599016 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|114, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|86||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 264.0825842924789 } dataWritten: 210272 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 259.0226022142432 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|78||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 678.3563510786536 } dataWritten: 209933 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 668.01056947355 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|55||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 25.60273139230473 } dataWritten: 210652 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 60 version: 1|116||4fd97a3b0d2fef4d6a507be2 based on: 1|114||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:34 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|55||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 25.60273139230473 } on: { a: 12.55217658236718 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|116, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:34 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|68||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 123.1918419151289 } dataWritten: 210217 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 112.3249507419898 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 216.8904302452864 } -->> { : 233.8565055904641 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|100||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 233.8565055904641 } dataWritten: 209735 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 228.4906364748741 }
m30001| Thu Jun 14 01:44:34 [conn4] request split points lookup for chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30999| Thu Jun 14 01:44:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 640.7093733209429 } dataWritten: 210412 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:34 [conn] chunk not full enough to trigger auto-split { a: 633.4064006845748 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 327.5292321238884 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 327.5292321238884 }, max: { a: 353.2720479801309 }, from: "shard0001", splitKeys: [ { a: 337.6965417950217 } ], shardId: "test.foo-a_327.5292321238884", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff2
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|116||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-58", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675017), what: "split", ns: "test.foo", details: { before: { min: { a: 327.5292321238884 }, max: { a: 353.2720479801309 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 327.5292321238884 }, max: { a: 337.6965417950217 }, lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 337.6965417950217 }, max: { a: 353.2720479801309 }, lastmod: Timestamp 1000|118, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 773.3799848158397 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|53||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 353.2720479801309 } dataWritten: 209779 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 61 version: 1|118||4fd97a3b0d2fef4d6a507be2 based on: 1|116||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|53||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 353.2720479801309 } on: { a: 337.6965417950217 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|118, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|75||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 815.7684070742035 } dataWritten: 210156 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 808.7611254605828 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|63||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 773.3799848158397 } dataWritten: 209745 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 763.320232140777 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 840.7121644073931 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|112||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 840.7121644073931 } dataWritten: 210706 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 838.6147150797694 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 210456 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 725.0103700450944 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 515.6449770586091 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 515.6449770586091 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 515.6449770586091 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 526.919018850918 } ], shardId: "test.foo-a_515.6449770586091", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff3
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|118||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-59", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675164), what: "split", ns: "test.foo", details: { before: { min: { a: 515.6449770586091 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 515.6449770586091 }, max: { a: 526.919018850918 }, lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 39.89992532263464 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 938.1160661714987 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 938.1160661714987 }, max: { a: 964.9150523226922 }, from: "shard0001", splitKeys: [ { a: 948.0165404542549 } ], shardId: "test.foo-a_938.1160661714987", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff4
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|120||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-60", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675223), what: "split", ns: "test.foo", details: { before: { min: { a: 938.1160661714987 }, max: { a: 964.9150523226922 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 938.1160661714987 }, max: { a: 948.0165404542549 }, lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 948.0165404542549 }, max: { a: 964.9150523226922 }, lastmod: Timestamp 1000|122, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 136.5735165062921 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 136.5735165062921 }, max: { a: 159.2125242384949 }, from: "shard0001", splitKeys: [ { a: 146.6503611644078 } ], shardId: "test.foo-a_136.5735165062921", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff5
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|122||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-61", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675229), what: "split", ns: "test.foo", details: { before: { min: { a: 136.5735165062921 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 136.5735165062921 }, max: { a: 146.6503611644078 }, lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 146.6503611644078 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 309.3101713472285 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 12.55217658236718 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 353.2720479801309 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 353.2720479801309 }, max: { a: 378.3565272980204 }, from: "shard0001", splitKeys: [ { a: 363.6779080113047 } ], shardId: "test.foo-a_353.2720479801309", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff6
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|124||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-62", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675322), what: "split", ns: "test.foo", details: { before: { min: { a: 353.2720479801309 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 353.2720479801309 }, max: { a: 363.6779080113047 }, lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 363.6779080113047 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 640.7093733209429 } dataWritten: 210457 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 633.3046957086054 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|66||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 542.4296058071777 } dataWritten: 210545 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 62 version: 1|120||4fd97a3b0d2fef4d6a507be2 based on: 1|118||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|66||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 542.4296058071777 } on: { a: 526.919018850918 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|120, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|94||000000000000000000000000 min: { a: 39.89992532263464 } max: { a: 57.56464668319472 } dataWritten: 209952 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 50.02505730712426 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|49||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 964.9150523226922 } dataWritten: 210734 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 63 version: 1|122||4fd97a3b0d2fef4d6a507be2 based on: 1|120||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|49||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 964.9150523226922 } on: { a: 948.0165404542549 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|122, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|96||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 159.2125242384949 } dataWritten: 210772 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 64 version: 1|124||4fd97a3b0d2fef4d6a507be2 based on: 1|122||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|96||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 159.2125242384949 } on: { a: 146.6503611644078 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|124, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|84||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 327.5292321238884 } dataWritten: 210246 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 321.0326618484959 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|78||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 678.3563510786536 } dataWritten: 209838 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 667.5360610890879 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|116||000000000000000000000000 min: { a: 12.55217658236718 } max: { a: 25.60273139230473 } dataWritten: 210384 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 23.89979387907493 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|112||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 840.7121644073931 } dataWritten: 210097 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 838.3983116351068 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|75||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 815.7684070742035 } dataWritten: 209847 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 808.4238882917007 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|54||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 378.3565272980204 } dataWritten: 209874 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 65 version: 1|126||4fd97a3b0d2fef4d6a507be2 based on: 1|124||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|54||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 378.3565272980204 } on: { a: 363.6779080113047 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|126, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 210014 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 724.5702771808903 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 337.6965417950217 } -->> { : 353.2720479801309 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|118||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 353.2720479801309 } dataWritten: 210092 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 348.0410606554206 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 473.1445991105042 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 473.1445991105042 }, max: { a: 498.2021416153332 }, from: "shard0001", splitKeys: [ { a: 483.6281235892167 } ], shardId: "test.foo-a_473.1445991105042", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff7
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|126||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-63", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675455), what: "split", ns: "test.foo", details: { before: { min: { a: 473.1445991105042 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 473.1445991105042 }, max: { a: 483.6281235892167 }, lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 483.6281235892167 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 188.6698238706465 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 101.960589257945 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 542.4296058071777 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 542.4296058071777 }, max: { a: 563.897889911273 }, from: "shard0001", splitKeys: [ { a: 552.1925267328988 } ], shardId: "test.foo-a_542.4296058071777", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff8
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|128||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-64", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675535), what: "split", ns: "test.foo", details: { before: { min: { a: 542.4296058071777 }, max: { a: 563.897889911273 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 542.4296058071777 }, max: { a: 552.1925267328988 }, lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 552.1925267328988 }, max: { a: 563.897889911273 }, lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 277.1560315461681 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 427.2300955074828 } -->> { : 441.0435238853461 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 729.8361633348899 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 729.8361633348899 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 729.8361633348899 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 738.6198156338151 } ], shardId: "test.foo-a_729.8361633348899", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedff9
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|130||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-65", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675627), what: "split", ns: "test.foo", details: { before: { min: { a: 729.8361633348899 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 729.8361633348899 }, max: { a: 738.6198156338151 }, lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 363.6779080113047 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 882.331873780809 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 882.331873780809 }, max: { a: 905.2934559328332 }, from: "shard0001", splitKeys: [ { a: 891.8750702869381 } ], shardId: "test.foo-a_882.331873780809", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedffa
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|132||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-66", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675695), what: "split", ns: "test.foo", details: { before: { min: { a: 882.331873780809 }, max: { a: 905.2934559328332 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 882.331873780809 }, max: { a: 891.8750702869381 }, lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 891.8750702869381 }, max: { a: 905.2934559328332 }, lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 101.960589257945 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 101.960589257945 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 101.960589257945 }, max: { a: 123.1918419151289 }, from: "shard0001", splitKeys: [ { a: 111.0431509615952 } ], shardId: "test.foo-a_101.960589257945", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedffb
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|134||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-67", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675750), what: "split", ns: "test.foo", details: { before: { min: { a: 101.960589257945 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 101.960589257945 }, max: { a: 111.0431509615952 }, lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 111.0431509615952 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 337.6965417950217 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 111.0431509615952 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 277.1560315461681 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 136.5735165062921 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 392.8718206829087 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 515.6449770586091 } -->> { : 526.919018850918 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 918.4259760765641 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 918.4259760765641 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 918.4259760765641 }, max: { a: 938.1160661714987 }, from: "shard0001", splitKeys: [ { a: 927.6813889109981 } ], shardId: "test.foo-a_918.4259760765641", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedffc
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|136||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-68", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675889), what: "split", ns: "test.foo", details: { before: { min: { a: 918.4259760765641 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|102, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 918.4259760765641 }, max: { a: 927.6813889109981 }, lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 927.6813889109981 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 101.960589257945 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 83.77384564239721 } -->> { : 101.960589257945 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 83.77384564239721 }, max: { a: 101.960589257945 }, from: "shard0001", splitKeys: [ { a: 92.91917824556573 } ], shardId: "test.foo-a_83.77384564239721", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedffd
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|138||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-69", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675916), what: "split", ns: "test.foo", details: { before: { min: { a: 83.77384564239721 }, max: { a: 101.960589257945 }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 83.77384564239721 }, max: { a: 92.91917824556573 }, lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 309.3101713472285 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 610.6068178358934 }
m30001| Thu Jun 14 01:44:35 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 590.8997745355827 } -->> { : 610.6068178358934 }
m30001| Thu Jun 14 01:44:35 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 590.8997745355827 }, max: { a: 610.6068178358934 }, from: "shard0001", splitKeys: [ { a: 599.2155367136296 } ], shardId: "test.foo-a_590.8997745355827", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:35 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4332a28802daeedffe
m30001| Thu Jun 14 01:44:35 [conn4] splitChunk accepted at version 1|140||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:35 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:35-70", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652675983), what: "split", ns: "test.foo", details: { before: { min: { a: 590.8997745355827 }, max: { a: 610.6068178358934 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 590.8997745355827 }, max: { a: 599.2155367136296 }, lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 599.2155367136296 }, max: { a: 610.6068178358934 }, lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:35 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:35 [conn4] request split points lookup for chunk test.foo { : 277.1560315461681 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 773.3799848158397 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 752.6019558395919 } -->> { : 773.3799848158397 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 752.6019558395919 }, max: { a: 773.3799848158397 }, from: "shard0001", splitKeys: [ { a: 761.349721153896 } ], shardId: "test.foo-a_752.6019558395919", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeedfff
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|142||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-71", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676039), what: "split", ns: "test.foo", details: { before: { min: { a: 752.6019558395919 }, max: { a: 773.3799848158397 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 752.6019558395919 }, max: { a: 761.349721153896 }, lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 761.349721153896 }, max: { a: 773.3799848158397 }, lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 948.0165404542549 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 57.56464668319472 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 57.56464668319472 }, max: { a: 83.77384564239721 }, from: "shard0001", splitKeys: [ { a: 66.37486853611429 } ], shardId: "test.foo-a_57.56464668319472", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee000
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|144||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-72", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676140), what: "split", ns: "test.foo", details: { before: { min: { a: 57.56464668319472 }, max: { a: 83.77384564239721 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 57.56464668319472 }, max: { a: 66.37486853611429 }, lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 66.37486853611429 }, max: { a: 83.77384564239721 }, lastmod: Timestamp 1000|146, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
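(Every split above is recorded in an "about to log metadata event" line whose details carry the original chunk bounds plus the resulting left/right halves and their new lastmod minor versions. As a reading aid only, the sketch below pulls those bounds out of such lines; the regex is an assumption about the line layout shown in this log, not a MongoDB tool or API.)

import re

# Matches the details of a what: "split" metadata event as printed above and
# captures the before/left/right chunk bounds on the shard key "a".
SPLIT_EVENT = re.compile(
    r'what: "split".*?'
    r'before: \{ min: \{ a: ([\d.]+) \}, max: \{ a: ([\d.]+) \}.*?'
    r'left: \{ min: \{ a: ([\d.]+) \}, max: \{ a: ([\d.]+) \}.*?'
    r'right: \{ min: \{ a: ([\d.]+) \}, max: \{ a: ([\d.]+) \}'
)

def split_events(log_lines):
    """Yield ((before_min, before_max), (left_min, left_max), (right_min, right_max))."""
    for line in log_lines:
        m = SPLIT_EVENT.search(line)
        if m:
            v = list(map(float, m.groups()))
            yield (v[0], v[1]), (v[2], v[3]), (v[4], v[5])

# Example: the event logged above for chunk { a: 83.77384564239721 } -->>
# { a: 101.960589257945 } yields a left/right boundary of 92.91917824556573,
# matching the splitKeys value in the corresponding splitChunk request.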
m30001| Thu Jun 14 01:44:36 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.3, size: 128MB, took 2.728 secs
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 599.2155367136296 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 101.960589257945 } -->> { : 111.0431509615952 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 146.6503611644078 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|80||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 498.2021416153332 } dataWritten: 210703 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 66 version: 1|128||4fd97a3b0d2fef4d6a507be2 based on: 1|126||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|80||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 498.2021416153332 } on: { a: 483.6281235892167 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|128, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|105||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 188.6698238706465 } dataWritten: 210738 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 186.2104195691594 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|67||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 101.960589257945 } dataWritten: 209794 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 93.71816067892058 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|57||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 563.897889911273 } dataWritten: 210486 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 67 version: 1|130||4fd97a3b0d2fef4d6a507be2 based on: 1|128||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|57||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 563.897889911273 } on: { a: 552.1925267328988 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|130, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|81||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 855.8703567421647 } dataWritten: 210695 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 850.9874355947401 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|103||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 277.1560315461681 } dataWritten: 209885 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 274.559018368476 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|111||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 827.5642418995561 } dataWritten: 210542 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 826.1235407005589 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|79||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 473.1445991105042 } dataWritten: 210290 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 465.9528518679106 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|97||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 441.0435238853461 } dataWritten: 209725 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 436.9555901244864 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 210588 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 724.1328806624152 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } dataWritten: 210268 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 483.1435864801863 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|98||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 456.4586339452165 } dataWritten: 209959 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 452.2973526208728 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|88||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 752.6019558395919 } dataWritten: 210694 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 68 version: 1|132||4fd97a3b0d2fef4d6a507be2 based on: 1|130||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|88||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 752.6019558395919 } on: { a: 738.6198156338151 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|132, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 209875 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 724.0081502304952 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|126||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 378.3565272980204 } dataWritten: 209951 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 372.8453613491845 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|83||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 309.3101713472285 } dataWritten: 210638 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 302.840355889751 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|111||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 827.5642418995561 } dataWritten: 210688 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 825.8717902868838 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|65||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 515.6449770586091 } dataWritten: 210014 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 507.8862565270078 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|109||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 977.1164746659301 } dataWritten: 210248 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 975.213390556916 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|61||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 905.2934559328332 } dataWritten: 210548 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 69 version: 1|134||4fd97a3b0d2fef4d6a507be2 based on: 1|132||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|61||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 905.2934559328332 } on: { a: 891.8750702869381 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|134, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|81||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 855.8703567421647 } dataWritten: 210027 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 850.5492393872331 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|69||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 427.2300955074828 } dataWritten: 210091 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 420.5188810764289 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|134||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 905.2934559328332 } dataWritten: 209789 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 902.2862887494595 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|68||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 123.1918419151289 } dataWritten: 210764 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 70 version: 1|136||4fd97a3b0d2fef4d6a507be2 based on: 1|134||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|68||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 123.1918419151289 } on: { a: 111.0431509615952 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|136, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|117||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 337.6965417950217 } dataWritten: 209872 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 336.5767905917445 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|136||000000000000000000000000 min: { a: 111.0431509615952 } max: { a: 123.1918419151289 } dataWritten: 210247 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 121.5599040260938 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|77||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 657.3538695372831 } dataWritten: 209836 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 650.8007698085491 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|104||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 294.0222214358918 } dataWritten: 210608 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 287.3703930472468 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|125||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 363.6779080113047 } dataWritten: 210765 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 362.6963514537844 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|89||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 623.3985075048967 } dataWritten: 209940 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 618.6597815008646 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|95||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 136.5735165062921 } dataWritten: 210228 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 132.0752115084287 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|65||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 515.6449770586091 } dataWritten: 209781 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 507.6017431566926 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|92||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 411.0287894698923 } dataWritten: 210466 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 402.5025090190456 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|119||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 526.919018850918 } dataWritten: 210179 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 525.1603360895757 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|102||000000000000000000000000 min: { a: 918.4259760765641 } max: { a: 938.1160661714987 } dataWritten: 210376 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 71 version: 1|138||4fd97a3b0d2fef4d6a507be2 based on: 1|136||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|102||000000000000000000000000 min: { a: 918.4259760765641 } max: { a: 938.1160661714987 } on: { a: 927.6813889109981 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|138, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|67||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 101.960589257945 } dataWritten: 210334 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 72 version: 1|140||4fd97a3b0d2fef4d6a507be2 based on: 1|138||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|67||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 101.960589257945 } on: { a: 92.91917824556573 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|140, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|75||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 815.7684070742035 } dataWritten: 210713 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 807.4105833931693 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|84||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 327.5292321238884 } dataWritten: 210046 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 319.5076728362091 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|106||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 204.0577089538382 } dataWritten: 210381 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 198.0249077363652 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|79||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 473.1445991105042 } dataWritten: 209715 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 464.9710230713899 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|69||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 427.2300955074828 } dataWritten: 209757 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 419.9018097646356 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|59||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 610.6068178358934 } dataWritten: 210747 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 73 version: 1|142||4fd97a3b0d2fef4d6a507be2 based on: 1|140||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:35 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|59||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 610.6068178358934 } on: { a: 599.2155367136296 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|142, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:35 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:35 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|104||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 294.0222214358918 } dataWritten: 210738 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:35 [conn] chunk not full enough to trigger auto-split { a: 287.1335552588727 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|75||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 815.7684070742035 } dataWritten: 210388 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 807.2931916192827 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } dataWritten: 210537 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 482.4013460711474 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|65||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 515.6449770586091 } dataWritten: 210277 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 507.3145711940267 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|63||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 773.3799848158397 } dataWritten: 210613 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 74 version: 1|144||4fd97a3b0d2fef4d6a507be2 based on: 1|142||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|63||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 773.3799848158397 } on: { a: 761.349721153896 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|144, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|85||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 248.3080159156712 } dataWritten: 209732 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 242.2857266310467 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|91||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 392.8718206829087 } dataWritten: 210310 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 387.1832167386808 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|121||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 948.0165404542549 } dataWritten: 210354 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 946.8908271211652 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|109||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 977.1164746659301 } dataWritten: 210574 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 974.3653984549803 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|51||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 83.77384564239721 } dataWritten: 209944 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 75 version: 1|146||4fd97a3b0d2fef4d6a507be2 based on: 1|144||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|51||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 83.77384564239721 } on: { a: 66.37486853611429 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|146, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|134||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 905.2934559328332 } dataWritten: 210291 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 901.1874382395616 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|141||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 599.2155367136296 } dataWritten: 210679 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 599.0117621885574 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|135||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 111.0431509615952 } dataWritten: 210720 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 110.1991050918688 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|123||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 146.6503611644078 } dataWritten: 210255 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 144.805347280861 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|128||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 498.2021416153332 } dataWritten: 210607 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 483.6281235892167 } -->> { : 498.2021416153332 }
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 492.2054654709654 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|95||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 136.5735165062921 } dataWritten: 209874 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 136.5735165062921 }
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 131.2164024344422 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 39.89992532263464 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 39.89992532263464 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 39.89992532263464 }, max: { a: 57.56464668319472 }, from: "shard0001", splitKeys: [ { a: 47.94081917961535 } ], shardId: "test.foo-a_39.89992532263464", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee001
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|146||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-73", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676249), what: "split", ns: "test.foo", details: { before: { min: { a: 39.89992532263464 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 39.89992532263464 }, max: { a: 47.94081917961535 }, lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 47.94081917961535 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 111.0431509615952 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 694.6501944983177 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 694.6501944983177 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 694.6501944983177 }, max: { a: 714.0536251380356 }, from: "shard0001", splitKeys: [ { a: 703.7520953686671 } ], shardId: "test.foo-a_694.6501944983177", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee002
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|148||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-74", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676284), what: "split", ns: "test.foo", details: { before: { min: { a: 694.6501944983177 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 694.6501944983177 }, max: { a: 703.7520953686671 }, lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 703.7520953686671 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 563.897889911273 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 563.897889911273 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 563.897889911273 }, max: { a: 590.8997745355827 }, from: "shard0001", splitKeys: [ { a: 571.914212129846 } ], shardId: "test.foo-a_563.897889911273", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee003
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|150||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-75", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676344), what: "split", ns: "test.foo", details: { before: { min: { a: 563.897889911273 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 563.897889911273 }, max: { a: 571.914212129846 }, lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 571.914212129846 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|152, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 977.1164746659301 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 977.1164746659301 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 985.6773819217475 } ], shardId: "test.foo-a_977.1164746659301", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee004
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|152||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-76", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676368), what: "split", ns: "test.foo", details: { before: { min: { a: 977.1164746659301 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|110, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 977.1164746659301 }, max: { a: 985.6773819217475 }, lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 985.6773819217475 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|94||000000000000000000000000 min: { a: 39.89992532263464 } max: { a: 57.56464668319472 } dataWritten: 209959 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 76 version: 1|148||4fd97a3b0d2fef4d6a507be2 based on: 1|146||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|94||000000000000000000000000 min: { a: 39.89992532263464 } max: { a: 57.56464668319472 } on: { a: 47.94081917961535 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|148, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|136||000000000000000000000000 min: { a: 111.0431509615952 } max: { a: 123.1918419151289 } dataWritten: 209923 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 120.3538428674498 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|74||000000000000000000000000 min: { a: 694.6501944983177 } max: { a: 714.0536251380356 } dataWritten: 210764 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 77 version: 1|150||4fd97a3b0d2fef4d6a507be2 based on: 1|148||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|74||000000000000000000000000 min: { a: 694.6501944983177 } max: { a: 714.0536251380356 } on: { a: 703.7520953686671 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|150, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|83||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 309.3101713472285 } dataWritten: 210119 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 301.8783272597248 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|125||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 363.6779080113047 } dataWritten: 210120 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 361.7829435449319 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 590.8997745355827 } dataWritten: 210007 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 78 version: 1|152||4fd97a3b0d2fef4d6a507be2 based on: 1|150||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 590.8997745355827 } on: { a: 571.914212129846 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|152, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|110||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 998.3975234740553 } dataWritten: 209934 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 79 version: 1|154||4fd97a3b0d2fef4d6a507be2 based on: 1|152||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|110||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 998.3975234740553 } on: { a: 985.6773819217475 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|154, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|138||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 938.1160661714987 } dataWritten: 210543 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 938.1160661714987 }
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 936.2025448845045 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|106||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 204.0577089538382 } dataWritten: 210361 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 204.0577089538382 }
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 197.2759644683415 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|130||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 563.897889911273 } dataWritten: 209755 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 552.1925267328988 } -->> { : 563.897889911273 }
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 560.3850362162826 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|65||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 515.6449770586091 } dataWritten: 210701 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 498.2021416153332 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 498.2021416153332 }, max: { a: 515.6449770586091 }, from: "shard0001", splitKeys: [ { a: 506.5947777056855 } ], shardId: "test.foo-a_498.2021416153332", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee005
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|154||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-77", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676441), what: "split", ns: "test.foo", details: { before: { min: { a: 498.2021416153332 }, max: { a: 515.6449770586091 }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 498.2021416153332 }, max: { a: 506.5947777056855 }, lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 506.5947777056855 }, max: { a: 515.6449770586091 }, lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 80 version: 1|156||4fd97a3b0d2fef4d6a507be2 based on: 1|154||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|65||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 515.6449770586091 } on: { a: 506.5947777056855 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|156, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|130||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 563.897889911273 } dataWritten: 210321 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 552.1925267328988 } -->> { : 563.897889911273 }
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 560.337938680365 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|149||000000000000000000000000 min: { a: 694.6501944983177 } max: { a: 703.7520953686671 } dataWritten: 210774 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 694.6501944983177 } -->> { : 703.7520953686671 }
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 703.5679884070885 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 248.3080159156712 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 571.914212129846 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 571.914212129846 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 571.914212129846 }, max: { a: 590.8997745355827 }, from: "shard0001", splitKeys: [ { a: 580.4600029065366 } ], shardId: "test.foo-a_571.914212129846", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee006
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|156||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-78", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676518), what: "split", ns: "test.foo", details: { before: { min: { a: 571.914212129846 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|152, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 571.914212129846 }, max: { a: 580.4600029065366 }, lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 580.4600029065366 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 773.3799848158397 } -->> { : 784.2714953599016 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 948.0165404542549 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 948.0165404542549 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 948.0165404542549 }, max: { a: 964.9150523226922 }, from: "shard0001", splitKeys: [ { a: 955.9182567868356 } ], shardId: "test.foo-a_948.0165404542549", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee007
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|158||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-79", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676529), what: "split", ns: "test.foo", details: { before: { min: { a: 948.0165404542549 }, max: { a: 964.9150523226922 }, lastmod: Timestamp 1000|122, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 948.0165404542549 }, max: { a: 955.9182567868356 }, lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 955.9182567868356 }, max: { a: 964.9150523226922 }, lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 216.8904302452864 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 216.8904302452864 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 216.8904302452864 }, max: { a: 233.8565055904641 }, from: "shard0001", splitKeys: [ { a: 225.5962198744838 } ], shardId: "test.foo-a_216.8904302452864", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee008
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|160||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-80", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676538), what: "split", ns: "test.foo", details: { before: { min: { a: 216.8904302452864 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 216.8904302452864 }, max: { a: 225.5962198744838 }, lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 225.5962198744838 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 985.6773819217475 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 729.8361633348899 } -->> { : 738.6198156338151 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 12.55217658236718 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 176.0230312595962 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 159.2125242384949 } -->> { : 176.0230312595962 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 159.2125242384949 }, max: { a: 176.0230312595962 }, from: "shard0001", splitKeys: [ { a: 167.6382092456179 } ], shardId: "test.foo-a_159.2125242384949", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee009
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|162||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-81", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676666), what: "split", ns: "test.foo", details: { before: { min: { a: 159.2125242384949 }, max: { a: 176.0230312595962 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 159.2125242384949 }, max: { a: 167.6382092456179 }, lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 955.9182567868356 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 225.5962198744838 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 66.37486853611429 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 66.37486853611429 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 66.37486853611429 }, max: { a: 83.77384564239721 }, from: "shard0001", splitKeys: [ { a: 74.43717892117874 } ], shardId: "test.foo-a_66.37486853611429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee00a
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|164||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-82", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676743), what: "split", ns: "test.foo", details: { before: { min: { a: 66.37486853611429 }, max: { a: 83.77384564239721 }, lastmod: Timestamp 1000|146, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 66.37486853611429 }, max: { a: 74.43717892117874 }, lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 74.43717892117874 }, max: { a: 83.77384564239721 }, lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 167.6382092456179 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 101.960589257945 } -->> { : 111.0431509615952 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 714.0536251380356 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 714.0536251380356 }, max: { a: 729.8361633348899 }, from: "shard0001", splitKeys: [ { a: 721.9923962351373 } ], shardId: "test.foo-a_714.0536251380356", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee00b
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|166||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-83", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676854), what: "split", ns: "test.foo", details: { before: { min: { a: 714.0536251380356 }, max: { a: 729.8361633348899 }, lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 721.9923962351373 }, max: { a: 729.8361633348899 }, lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 868.5788679342879 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 363.6779080113047 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 392.8718206829087 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 392.8718206829087 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 392.8718206829087 }, max: { a: 411.0287894698923 }, from: "shard0001", splitKeys: [ { a: 400.6101810646703 } ], shardId: "test.foo-a_392.8718206829087", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee00c
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|168||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-84", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676961), what: "split", ns: "test.foo", details: { before: { min: { a: 392.8718206829087 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 400.6101810646703 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 337.6965417950217 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 337.6965417950217 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 337.6965417950217 }, max: { a: 353.2720479801309 }, from: "shard0001", splitKeys: [ { a: 344.8762285660836 } ], shardId: "test.foo-a_337.6965417950217", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee00d
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|170||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-85", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676982), what: "split", ns: "test.foo", details: { before: { min: { a: 337.6965417950217 }, max: { a: 353.2720479801309 }, lastmod: Timestamp 1000|118, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 344.8762285660836 }, max: { a: 353.2720479801309 }, lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:44:36 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 640.7093733209429 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:44:36 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 640.7093733209429 }, max: { a: 657.3538695372831 }, from: "shard0001", splitKeys: [ { a: 648.6747268265868 } ], shardId: "test.foo-a_640.7093733209429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:36 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4432a28802daeee00e
m30001| Thu Jun 14 01:44:36 [conn4] splitChunk accepted at version 1|172||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:36 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:36-86", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652676989), what: "split", ns: "test.foo", details: { before: { min: { a: 640.7093733209429 }, max: { a: 657.3538695372831 }, lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 640.7093733209429 }, max: { a: 648.6747268265868 }, lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 648.6747268265868 }, max: { a: 657.3538695372831 }, lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:36 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 146.6503611644078 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:44:36 [conn4] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 12.55217658236718 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:44:37 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 840.7121644073931 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:44:37 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 840.7121644073931 }, max: { a: 855.8703567421647 }, from: "shard0001", splitKeys: [ { a: 848.2332478721062 } ], shardId: "test.foo-a_840.7121644073931", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4532a28802daeee00f
m30001| Thu Jun 14 01:44:37 [conn4] splitChunk accepted at version 1|174||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:37-87", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652677011), what: "split", ns: "test.foo", details: { before: { min: { a: 840.7121644073931 }, max: { a: 855.8703567421647 }, lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 840.7121644073931 }, max: { a: 848.2332478721062 }, lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 848.2332478721062 }, max: { a: 855.8703567421647 }, lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 868.5788679342879 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 891.8750702869381 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 277.1560315461681 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:37 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 277.1560315461681 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:37 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 277.1560315461681 }, max: { a: 294.0222214358918 }, from: "shard0001", splitKeys: [ { a: 284.9747465988205 } ], shardId: "test.foo-a_277.1560315461681", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4532a28802daeee010
m30001| Thu Jun 14 01:44:37 [conn4] splitChunk accepted at version 1|176||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:37-88", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652677085), what: "split", ns: "test.foo", details: { before: { min: { a: 277.1560315461681 }, max: { a: 294.0222214358918 }, lastmod: Timestamp 1000|104, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 277.1560315461681 }, max: { a: 284.9747465988205 }, lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 284.9747465988205 }, max: { a: 294.0222214358918 }, lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 678.3563510786536 } -->> { : 694.6501944983177 }
m30001| Thu Jun 14 01:44:37 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 678.3563510786536 } -->> { : 694.6501944983177 }
m30001| Thu Jun 14 01:44:37 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 678.3563510786536 }, max: { a: 694.6501944983177 }, from: "shard0001", splitKeys: [ { a: 685.0292821001574 } ], shardId: "test.foo-a_678.3563510786536", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4532a28802daeee011
m30001| Thu Jun 14 01:44:37 [conn4] splitChunk accepted at version 1|178||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:37-89", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652677103), what: "split", ns: "test.foo", details: { before: { min: { a: 678.3563510786536 }, max: { a: 694.6501944983177 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 678.3563510786536 }, max: { a: 685.0292821001574 }, lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 685.0292821001574 }, max: { a: 694.6501944983177 }, lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 210262 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 722.4827725000639 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|138||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 938.1160661714987 } dataWritten: 210672 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 936.0667537270348 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|101||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 918.4259760765641 } dataWritten: 210001 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 914.2203758556967 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|86||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 264.0825842924789 } dataWritten: 210223 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 256.1186333994808 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|152||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 590.8997745355827 } dataWritten: 209769 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 81 version: 1|158||4fd97a3b0d2fef4d6a507be2 based on: 1|156||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|152||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 590.8997745355827 } on: { a: 580.4600029065366 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|158, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|113||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 784.2714953599016 } dataWritten: 209897 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 781.2252317586103 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|122||000000000000000000000000 min: { a: 948.0165404542549 } max: { a: 964.9150523226922 } dataWritten: 210618 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 82 version: 1|160||4fd97a3b0d2fef4d6a507be2 based on: 1|158||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|122||000000000000000000000000 min: { a: 948.0165404542549 } max: { a: 964.9150523226922 } on: { a: 955.9182567868356 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|160, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|100||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 233.8565055904641 } dataWritten: 210228 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 83 version: 1|162||4fd97a3b0d2fef4d6a507be2 based on: 1|160||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|100||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 233.8565055904641 } on: { a: 225.5962198744838 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|162, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|153||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 985.6773819217475 } dataWritten: 210529 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 985.5204567006224 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|155||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 506.5947777056855 } dataWritten: 209931 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 506.4301040765596 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|85||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 248.3080159156712 } dataWritten: 210300 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 241.6478555897181 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|154||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 998.3975234740553 } dataWritten: 210181 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 994.3859050220345 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|131||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 738.6198156338151 } dataWritten: 210312 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 737.2373307707888 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|115||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 12.55217658236718 } dataWritten: 210611 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 8.736421134311501 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 210188 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 722.2435635532759 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|98||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 456.4586339452165 } dataWritten: 210396 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 450.3246702162752 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|71||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 176.0230312595962 } dataWritten: 210331 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 84 version: 1|164||4fd97a3b0d2fef4d6a507be2 based on: 1|162||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|71||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 176.0230312595962 } on: { a: 167.6382092456179 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|164, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|160||000000000000000000000000 min: { a: 955.9182567868356 } max: { a: 964.9150523226922 } dataWritten: 210295 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 963.9304285263914 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|162||000000000000000000000000 min: { a: 225.5962198744838 } max: { a: 233.8565055904641 } dataWritten: 210150 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 233.2687692959681 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } dataWritten: 210341 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 481.2832737773074 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|111||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 827.5642418995561 } dataWritten: 209834 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 824.1147679790258 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|129||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 552.1925267328988 } dataWritten: 210748 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 550.0613149583567 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|146||000000000000000000000000 min: { a: 66.37486853611429 } max: { a: 83.77384564239721 } dataWritten: 210517 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 85 version: 1|166||4fd97a3b0d2fef4d6a507be2 based on: 1|164||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|146||000000000000000000000000 min: { a: 66.37486853611429 } max: { a: 83.77384564239721 } on: { a: 74.43717892117874 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|166, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|163||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 167.6382092456179 } dataWritten: 210723 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 167.4529705647482 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|155||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 506.5947777056855 } dataWritten: 209752 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 505.9819317290604 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|138||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 938.1160661714987 } dataWritten: 210612 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 935.7666850206903 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|106||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 204.0577089538382 } dataWritten: 210658 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 196.565602147657 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|135||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 111.0431509615952 } dataWritten: 210651 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 109.7396882816438 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 192033 splitThreshold: 943718
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } dataWritten: 209967 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 86 version: 1|168||4fd97a3b0d2fef4d6a507be2 based on: 1|166||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|87||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 729.8361633348899 } on: { a: 721.9923962351373 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|168, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|155||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 506.5947777056855 } dataWritten: 210302 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 505.7389617226925 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|108||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 882.331873780809 } dataWritten: 209855 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 876.3176880954276 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|126||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 378.3565272980204 } dataWritten: 210779 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 371.0042032818404 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|92||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 411.0287894698923 } dataWritten: 210021 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 87 version: 1|170||4fd97a3b0d2fef4d6a507be2 based on: 1|168||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|92||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 411.0287894698923 } on: { a: 400.6101810646703 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|170, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|118||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 353.2720479801309 } dataWritten: 210091 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 88 version: 1|172||4fd97a3b0d2fef4d6a507be2 based on: 1|170||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|118||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 353.2720479801309 } on: { a: 344.8762285660836 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|172, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|77||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 657.3538695372831 } dataWritten: 209905 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 89 version: 1|174||4fd97a3b0d2fef4d6a507be2 based on: 1|172||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:36 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|77||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 657.3538695372831 } on: { a: 648.6747268265868 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|174, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:36 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|124||000000000000000000000000 min: { a: 146.6503611644078 } max: { a: 159.2125242384949 } dataWritten: 210571 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:36 [conn] chunk not full enough to trigger auto-split { a: 153.8734034997404 }
m30999| Thu Jun 14 01:44:36 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|115||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 12.55217658236718 } dataWritten: 209837 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 8.159855525118997 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|81||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 855.8703567421647 } dataWritten: 209790 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 90 version: 1|176||4fd97a3b0d2fef4d6a507be2 based on: 1|174||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:37 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|81||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 855.8703567421647 } on: { a: 848.2332478721062 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|176, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|108||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 882.331873780809 } dataWritten: 210088 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 876.1019176650573 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } dataWritten: 210740 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 480.5269724612843 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 193961 splitThreshold: 943718
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|133||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 891.8750702869381 } dataWritten: 209900 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 889.7125475759419 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|99||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 216.8904302452864 } dataWritten: 210434 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 211.4069559531208 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|104||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 294.0222214358918 } dataWritten: 210597 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 91 version: 1|178||4fd97a3b0d2fef4d6a507be2 based on: 1|176||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:37 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|104||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 294.0222214358918 } on: { a: 284.9747465988205 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|178, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|143||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 761.349721153896 } dataWritten: 209727 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 759.8028711975353 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|73||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 694.6501944983177 } dataWritten: 210583 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 92 version: 1|180||4fd97a3b0d2fef4d6a507be2 based on: 1|178||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:37 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|73||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 694.6501944983177 } on: { a: 685.0292821001574 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|180, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
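[editor's note] The autosplit cycle logged above is always the same three steps: mongos tracks approximate bytes written per chunk (dataWritten), asks the owning shard for split points once the ~1 MB splitThreshold is reached, and on success commits the split and reloads its chunk map (the ChunkManager "sequenceNumber ... version 1|N" lines). The same split can be requested by hand through mongos with the split admin command. A minimal pymongo sketch, assuming the mongos on port 30999 from this test run and an illustrative split key that is not taken from the log:

    # Minimal sketch: ask mongos to split a chunk of test.foo at a chosen key.
    # Port 30999 matches the mongos in this test; the middle key is illustrative.
    from pymongo import MongoClient

    mongos = MongoClient("localhost", 30999)

    # Equivalent to sh.splitAt("test.foo", {a: 500.0}) in the mongo shell;
    # mongos resolves the owning chunk and sends splitChunk to the shard,
    # the same path the autosplit rounds above take.
    result = mongos.admin.command("split", "test.foo", middle={"a": 500.0})
    print(result)  # {'ok': 1.0} on success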
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 277.1560315461681 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 483.6281235892167 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 392.8718206829087 } -->> { : 400.6101810646703 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:37 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 657.3538695372831 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:44:37 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 657.3538695372831 }, max: { a: 678.3563510786536 }, from: "shard0001", splitKeys: [ { a: 664.5574284897642 } ], shardId: "test.foo-a_657.3538695372831", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4532a28802daeee012
m30001| Thu Jun 14 01:44:37 [conn4] splitChunk accepted at version 1|180||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:37-90", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652677309), what: "split", ns: "test.foo", details: { before: { min: { a: 657.3538695372831 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 657.3538695372831 }, max: { a: 664.5574284897642 }, lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 664.5574284897642 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
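[editor's note] Each committed split or migration is also recorded in the config database: the "about to log metadata event" documents above (what: "split", with before/left/right chunk bounds) are stored in config.changelog on the config server. A small sketch for reading them back, assuming the config server on port 30000 from this test run:

    # Sketch: read recent sharding events for test.foo from the config server
    # (localhost:30000 in this test); these are the "metadata event" documents
    # logged above, kept in config.changelog.
    from pymongo import MongoClient

    configsvr = MongoClient("localhost", 30000)

    for event in configsvr.config.changelog.find({"ns": "test.foo"}).sort("time", -1).limit(5):
        # "split" events carry details.before/left/right; other event types differ
        print(event["time"], event["what"], event["details"].get("before", {}).get("min"))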
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:44:37 [conn4] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:44:37 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 456.4586339452165 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:44:37 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 456.4586339452165 }, max: { a: 473.1445991105042 }, from: "shard0001", splitKeys: [ { a: 463.2766201180535 } ], shardId: "test.foo-a_456.4586339452165", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4532a28802daeee013
m30001| Thu Jun 14 01:44:37 [initandlisten] connection accepted from 127.0.0.1:48976 #5 (5 connections now open)
m30001| Thu Jun 14 01:44:37 [conn4] splitChunk accepted at version 1|182||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|103||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 277.1560315461681 } dataWritten: 210764 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 271.6972193772529 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|128||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 498.2021416153332 } dataWritten: 209820 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 490.8698922127193 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|109||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 977.1164746659301 } dataWritten: 209804 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 972.267382785289 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|169||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 400.6101810646703 } dataWritten: 210592 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 400.4052533202163 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|78||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 678.3563510786536 } dataWritten: 209866 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 93 version: 1|182||4fd97a3b0d2fef4d6a507be2 based on: 1|180||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:37 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|78||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 678.3563510786536 } on: { a: 664.5574284897642 } (splitThreshold 1048576)
m30001| Thu Jun 14 01:44:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:37-91", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652677324), what: "split", ns: "test.foo", details: { before: { min: { a: 456.4586339452165 }, max: { a: 473.1445991105042 }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 456.4586339452165 }, max: { a: 463.2766201180535 }, lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|182, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|153||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 985.6773819217475 } dataWritten: 209764 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 984.5546323036632 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|79||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 473.1445991105042 } dataWritten: 209898 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:37 [Balancer] connected connection!
m30999| Thu Jun 14 01:44:37 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:44:37 [Balancer] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:44:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:37 [Balancer] connected connection!
m30999| Thu Jun 14 01:44:37 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:44:37 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a450d2fef4d6a507be3" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a3b0d2fef4d6a507be1" } }
m30000| Thu Jun 14 01:44:37 [initandlisten] connection accepted from 127.0.0.1:60400 #10 (10 connections now open)
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:37 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a450d2fef4d6a507be3
m30999| Thu Jun 14 01:44:37 [Balancer] *** start balancing round
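[editor's note] A balance round is serialized by the 'balancer' document in config.locks: the dump just above shows the lock entry this mongos is about to write (state 1, why: "doing balance round") and the stored document it is replacing (state 0, i.e. currently free). Both can be inspected directly on the config server; a small sketch, again assuming localhost:30000 as in this test:

    # Sketch: inspect the balancer's distributed lock and on/off switch on the
    # config server (localhost:30000 in this test). These mirror the documents
    # the [Balancer] lines print before acquiring the lock.
    from pymongo import MongoClient

    configsvr = MongoClient("localhost", 30000)

    lock = configsvr.config.locks.find_one({"_id": "balancer"})
    # state 0 = free, 1 = being acquired, 2 = held
    print(lock["state"], lock.get("why"), lock.get("who"))

    # The balancer enable/disable switch lives in config.settings
    # (no document means the balancer is enabled).
    print(configsvr.config.settings.find_one({"_id": "balancer"}))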
m30999| Thu Jun 14 01:44:37 [conn] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:37 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:37 [conn] connected connection!
m30000| Thu Jun 14 01:44:37 [initandlisten] connection accepted from 127.0.0.1:60402 #11 (11 connections now open)
m30999| Thu Jun 14 01:44:37 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:44:37 [Balancer] shard0000 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:37 [Balancer] shard0001 maxSize: 0 currSize: 128 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:37 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:44:37 [Balancer] shard0000
m30999| Thu Jun 14 01:44:37 [Balancer] shard0001
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 1000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 12.55217658236718 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30001| Thu Jun 14 01:44:37 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: MinKey }, max: { a: 0.07367152018367129 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_MinKey", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30001| Thu Jun 14 01:44:37 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4532a28802daeee014
m30001| Thu Jun 14 01:44:37 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:37-92", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652677329), what: "moveChunk.start", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.07367152018367129 }, from: "shard0001", to: "shard0000" } }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30001| Thu Jun 14 01:44:37 [conn4] moveChunk request accepted at version 1|184||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30001| Thu Jun 14 01:44:37 [conn4] moveChunk number of documents: 5
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] ----
m30999| Thu Jun 14 01:44:37 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:44:37 [Balancer] donor : 93 chunks on shard0001
m30999| Thu Jun 14 01:44:37 [Balancer] receiver : 0 chunks on shard0000
m30999| Thu Jun 14 01:44:37 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_MinKey", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:37 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|11||000000000000000000000000 min: { a: MinKey } max: { a: 0.07367152018367129 }) shard0001:localhost:30001 -> shard0000:localhost:30000
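[editor's note] The balancer then drives the migration with the same moveChunk command a client could issue manually through mongos; the rest of the round in the log (moveChunk request accepted at version 1|184, data transfer progress, migrate commit, and the splitChunk failures while the collection's metadata lock is taken) is the donor/receiver protocol for this one MinKey chunk. A hand-triggered equivalent, sketched with pymongo against the mongos on port 30999 from this test run:

    # Sketch: manually migrate the { a: MinKey } chunk of test.foo from
    # shard0001 to shard0000 through mongos (localhost:30999 in this test),
    # i.e. the same operation the balancer has just scheduled.
    from pymongo import MongoClient
    from bson.min_key import MinKey

    mongos = MongoClient("localhost", 30999)

    result = mongos.admin.command(
        "moveChunk", "test.foo",
        find={"a": MinKey()},  # any key inside the chunk identifies it
        to="shard0000",
    )
    print(result)  # {'ok': 1.0, ...} once the migration commits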
m30999| Thu Jun 14 01:44:37 [conn] ChunkManager: time to load chunks for test.foo: 6ms sequenceNumber: 94 version: 1|184||4fd97a3b0d2fef4d6a507be2 based on: 1|182||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:37 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|79||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 473.1445991105042 } on: { a: 463.2766201180535 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|184, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:37 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|109||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 977.1164746659301 } dataWritten: 209978 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:37 [conn5] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:44:37 [conn5] request split points lookup for chunk test.foo { : 363.6779080113047 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:37 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 363.6779080113047 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:37 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 363.6779080113047 }, max: { a: 378.3565272980204 }, from: "shard0001", splitKeys: [ { a: 370.6941048296449 } ], shardId: "test.foo-a_363.6779080113047", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 972.1418979582137 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|126||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 378.3565272980204 } dataWritten: 210193 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:37 [conn5] request split points lookup for chunk test.foo { : 74.43717892117874 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:44:37 [initandlisten] connection accepted from 127.0.0.1:48979 #6 (6 connections now open)
m30000| Thu Jun 14 01:44:37 [initandlisten] connection accepted from 127.0.0.1:60403 #12 (12 connections now open)
m30999| Thu Jun 14 01:44:37 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 363.6779080113047 }, max: { a: 378.3565272980204 }, from: "shard0001", splitKeys: [ { a: 370.6941048296449 } ], shardId: "test.foo-a_363.6779080113047", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30001| Thu Jun 14 01:44:37 [conn5] request split points lookup for chunk test.foo { : 526.919018850918 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:37 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 526.919018850918 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:37 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 533.8202672992276 } ], shardId: "test.foo-a_526.919018850918", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:37 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|166||000000000000000000000000 min: { a: 74.43717892117874 } max: { a: 83.77384564239721 } dataWritten: 209923 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:37 [conn] chunk not full enough to trigger auto-split { a: 81.67970161358052 }
m30999| Thu Jun 14 01:44:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|120||000000000000000000000000 min: { a: 526.919018850918 } max: { a: 542.4296058071777 } dataWritten: 210351 splitThreshold: 1048576
m30000| Thu Jun 14 01:44:37 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.ns, filling with zeroes...
m30000| Thu Jun 14 01:44:37 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.ns, size: 16MB, took 0.48 secs
m30000| Thu Jun 14 01:44:37 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.0, filling with zeroes...
m30000| Thu Jun 14 01:44:38 [initandlisten] connection accepted from 127.0.0.1:60405 #13 (13 connections now open)
m30001| Thu Jun 14 01:44:38 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shardKeyPattern: { a: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:44:38 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.0, size: 16MB, took 0.42 secs
m30000| Thu Jun 14 01:44:38 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.1, filling with zeroes...
m30000| Thu Jun 14 01:44:38 [migrateThread] build index test.foo { _id: 1 }
m30000| Thu Jun 14 01:44:38 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:38 [migrateThread] info: creating collection test.foo on add index
m30000| Thu Jun 14 01:44:38 [migrateThread] build index test.foo { a: 1.0 }
m30000| Thu Jun 14 01:44:38 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:38 [conn12] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:81 reslen:1918 910ms
m30001| Thu Jun 14 01:44:38 [conn5] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 533.8202672992276 } ], shardId: "test.foo-a_526.919018850918", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:5355 reslen:326 910ms
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 216.8904302452864 }
m30000| Thu Jun 14 01:44:38 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: MinKey } -> { a: 0.07367152018367129 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 284.9747465988205 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:44:38 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 294.0222214358918 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:44:38 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 294.0222214358918 }, max: { a: 309.3101713472285 }, from: "shard0001", splitKeys: [ { a: 300.8225209355869 } ], shardId: "test.foo-a_294.0222214358918", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:38 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 344.8762285660836 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 400.6101810646703 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 66.37486853611429 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 827.5642418995561 }
m30999| Thu Jun 14 01:44:38 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 533.8202672992276 } ], shardId: "test.foo-a_526.919018850918", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|99||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 216.8904302452864 } dataWritten: 210494 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 211.1103849162028 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|89||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 623.3985075048967 } dataWritten: 210036 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 616.8718316819673 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|178||000000000000000000000000 min: { a: 284.9747465988205 } max: { a: 294.0222214358918 } dataWritten: 209948 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 292.3592361221199 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|83||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 309.3101713472285 } dataWritten: 210312 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 294.0222214358918 }, max: { a: 309.3101713472285 }, from: "shard0001", splitKeys: [ { a: 300.8225209355869 } ], shardId: "test.foo-a_294.0222214358918", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|172||000000000000000000000000 min: { a: 344.8762285660836 } max: { a: 353.2720479801309 } dataWritten: 209966 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 351.9652303806794 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|170||000000000000000000000000 min: { a: 400.6101810646703 } max: { a: 411.0287894698923 } dataWritten: 209817 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 407.5939609977327 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|145||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 66.37486853611429 } dataWritten: 210321 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 64.69114998731429 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|156||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 515.6449770586091 } dataWritten: 209852 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 513.4254688327931 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|111||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 827.5642418995561 } dataWritten: 209758 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 823.2408769549046 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } dataWritten: 210565 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 840.7121644073931 }
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 648.0131109241261 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|112||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 840.7121644073931 } dataWritten: 210236 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 834.7496352155969 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 905.2934559328332 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|134||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 905.2934559328332 } dataWritten: 209922 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 899.1845092671508 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|105||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 188.6698238706465 } dataWritten: 210605 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 188.6698238706465 }
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 182.5941501519085 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|180||000000000000000000000000 min: { a: 685.0292821001574 } max: { a: 694.6501944983177 } dataWritten: 210718 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 685.0292821001574 } -->> { : 694.6501944983177 }
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 691.6814933408037 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|107||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 868.5788679342879 } dataWritten: 209808 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 855.8703567421647 } -->> { : 868.5788679342879 }
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 862.62154422054 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|182||000000000000000000000000 min: { a: 664.5574284897642 } max: { a: 678.3563510786536 } dataWritten: 210056 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 664.5574284897642 } -->> { : 678.3563510786536 }
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 671.2287164292217 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 738.6198156338151 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:38 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 738.6198156338151 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:38 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 745.584605945794 } ], shardId: "test.foo-a_738.6198156338151", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:38 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 648.6747268265868 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:44:38 [conn5] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:38 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 441.0435238853461 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:38 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 441.0435238853461 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 448.7677253145013 } ], shardId: "test.foo-a_441.0435238853461", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:38 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|132||000000000000000000000000 min: { a: 738.6198156338151 } max: { a: 752.6019558395919 } dataWritten: 210316 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 745.584605945794 } ], shardId: "test.foo-a_738.6198156338151", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|143||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 761.349721153896 } dataWritten: 209929 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 759.1493082451186 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|101||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 918.4259760765641 } dataWritten: 209761 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 912.5776712657353 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|174||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 657.3538695372831 } dataWritten: 210269 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 655.6700741228101 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } dataWritten: 209883 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] chunk not full enough to trigger auto-split { a: 479.6700623861358 }
m30999| Thu Jun 14 01:44:38 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|98||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 456.4586339452165 } dataWritten: 209810 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:38 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 441.0435238853461 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 448.7677253145013 } ], shardId: "test.foo-a_441.0435238853461", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 5ms sequenceNumber: 95 version: 1|184||4fd97a3b0d2fef4d6a507be2 based on: 1|184||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] warning: chunk manager reload forced for collection 'test.foo', config version is 1|184||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|184, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|183||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 463.2766201180535 } dataWritten: 210436 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 463.2766201180535 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 462.829488903602 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|179||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 685.0292821001574 } dataWritten: 210245 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 678.3563510786536 } -->> { : 685.0292821001574 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 684.3172655381296 }
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:39 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:44:39 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 629.841934682977 } ], shardId: "test.foo-a_623.3985075048967", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 738.6198156338151 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:39 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 738.6198156338151 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:39 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 745.3945716991655 } ], shardId: "test.foo-a_738.6198156338151", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 309.3101713472285 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:39 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 309.3101713472285 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:39 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 309.3101713472285 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 316.4092932363823 } ], shardId: "test.foo-a_309.3101713472285", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 640.7093733209429 } dataWritten: 210606 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 629.841934682977 } ], shardId: "test.foo-a_623.3985075048967", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|132||000000000000000000000000 min: { a: 738.6198156338151 } max: { a: 752.6019558395919 } dataWritten: 209949 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 745.3945716991655 } ], shardId: "test.foo-a_738.6198156338151", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|139||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 92.91917824556573 } dataWritten: 210083 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 90.34145732724264 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|156||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 515.6449770586091 } dataWritten: 210236 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 513.1497298821614 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|84||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 327.5292321238884 } dataWritten: 210033 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 309.3101713472285 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 316.4092932363823 } ], shardId: "test.foo-a_309.3101713472285", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|103||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 277.1560315461681 } dataWritten: 210524 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 277.1560315461681 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 270.7727012025019 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } dataWritten: 210362 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 479.623809216069 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|93||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 39.89992532263464 } dataWritten: 210018 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 25.60273139230473 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:44:39 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 25.60273139230473 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:44:39 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 25.60273139230473 }, max: { a: 39.89992532263464 }, from: "shard0001", splitKeys: [ { a: 32.17361959545262 } ], shardId: "test.foo-a_25.60273139230473", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:39 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 25.60273139230473 }, max: { a: 39.89992532263464 }, from: "shard0001", splitKeys: [ { a: 32.17361959545262 } ], shardId: "test.foo-a_25.60273139230473", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|141||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 599.2155367136296 } dataWritten: 210509 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 599.2155367136296 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 597.386816610537 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|85||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 248.3080159156712 } dataWritten: 210114 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:39 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:39 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, from: "shard0001", splitKeys: [ { a: 240.2740585955363 } ], shardId: "test.foo-a_233.8565055904641", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:39 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, from: "shard0001", splitKeys: [ { a: 240.2740585955363 } ], shardId: "test.foo-a_233.8565055904641", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 3ms sequenceNumber: 96 version: 1|184||4fd97a3b0d2fef4d6a507be2 based on: 1|184||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] warning: chunk manager reload forced for collection 'test.foo', config version is 1|184||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|184, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|85||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 248.3080159156712 } dataWritten: 209787 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:39 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:39 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, from: "shard0001", splitKeys: [ { a: 240.250648169133 } ], shardId: "test.foo-a_233.8565055904641", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:44:39 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, from: "shard0001", splitKeys: [ { a: 240.250648169133 } ], shardId: "test.foo-a_233.8565055904641", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|169||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 400.6101810646703 } dataWritten: 209841 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 392.8718206829087 } -->> { : 400.6101810646703 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 399.722901241435 }
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 146.6503611644078 }
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 526.919018850918 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:39 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 526.919018850918 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:44:39 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 533.3002315334993 } ], shardId: "test.foo-a_526.919018850918", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 167.6382092456179 }
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : 571.914212129846 } -->> { : 580.4600029065366 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|123||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 146.6503611644078 } dataWritten: 209826 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 143.059724082713 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|120||000000000000000000000000 min: { a: 526.919018850918 } max: { a: 542.4296058071777 } dataWritten: 210698 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] warning: splitChunk failed - cmd: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 533.3002315334993 } ], shardId: "test.foo-a_526.919018850918", configdb: "localhost:30000" } result: { who: { _id: "test.foo", process: "domU-12-31-39-01-70-B4:30001:1339652668:318525290", state: 2, ts: ObjectId('4fd97a4532a28802daeee014'), when: new Date(1339652677329), who: "domU-12-31-39-01-70-B4:30001:1339652668:318525290:conn4:45957539", why: "migrate-{ a: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|163||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 167.6382092456179 } dataWritten: 209907 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 166.2297991211299 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|157||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 580.4600029065366 } dataWritten: 209870 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 578.7580235726676 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|11||000000000000000000000000 min: { a: MinKey } max: { a: 0.07367152018367129 } dataWritten: 189429 splitThreshold: 943718
m30001| Thu Jun 14 01:44:39 [conn5] request split points lookup for chunk test.foo { : MinKey } -->> { : 0.07367152018367129 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split no split entry
m30001| Thu Jun 14 01:44:39 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shardKeyPattern: { a: 1 }, state: "steady", counts: { cloned: 5, clonedBytes: 5325, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:44:39 [conn4] moveChunk setting version to: 2|0||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000000'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000000 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 1|184||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a72'), a: 907.3685837601903, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:44:39 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connected connection!
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30000
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x89968c0
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
m30999| Thu Jun 14 01:44:39 BackgroundJob starting: ConnectBG
m30001| Thu Jun 14 01:44:39 [initandlisten] connection accepted from 127.0.0.1:48982 #7 (7 connections now open)
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connected connection!
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30001
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|184, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x89967d0
m30000| Thu Jun 14 01:44:39 [initandlisten] connection accepted from 127.0.0.1:60406 #14 (14 connections now open)
m30000| Thu Jun 14 01:44:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: MinKey } -> { a: 0.07367152018367129 }
m30000| Thu Jun 14 01:44:39 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652679350), what: "moveChunk.to", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.07367152018367129 }, step1 of 5: 1054, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 956 } }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|133||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 891.8750702869381 } dataWritten: 210327 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 891.8750702869381 }
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 888.7807799133452 }
m30000| Thu Jun 14 01:44:39 [initandlisten] connection accepted from 127.0.0.1:60408 #15 (15 connections now open)
m30001| Thu Jun 14 01:44:39 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shardKeyPattern: { a: 1 }, state: "done", counts: { cloned: 5, clonedBytes: 5325, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:44:39 [conn4] moveChunk updating self version to: 2|1||4fd97a3b0d2fef4d6a507be2 through { a: 0.07367152018367129 } -> { a: 12.55217658236718 } for collection 'test.foo'
m30001| Thu Jun 14 01:44:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-93", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652679360), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.07367152018367129 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:44:39 [conn4] doing delete inline
m30000| Thu Jun 14 01:44:39 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.1, size: 32MB, took 1.271 secs
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", version: Timestamp 1000|184, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), globalVersion: Timestamp 2000|0, globalVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), reloadConfig: true, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|101||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 918.4259760765641 } dataWritten: 210225 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 3ms sequenceNumber: 97 version: 2|1||4fd97a3b0d2fef4d6a507be2 based on: 1|184||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x89967d0
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] insert will be retried b/c sharding config info is stale, retries: 0 ns: test.foo data: { _id: ObjectId('4fd97a4705a35677eff34a72'), a: 907.3685837601903, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30001| Thu Jun 14 01:44:39 [conn7] command admin.$cmd command: { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|184, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) W:42 reslen:307 305ms
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 918.4259760765641 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000001'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000001 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a73'), a: 847.1630756573431, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000002'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000002 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a74'), a: 247.1184157168664, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000003'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000003 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a75'), a: 916.2190032640715, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000004'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000004 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a76'), a: 572.0561972632207, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000005'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000005 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a77'), a: 654.2124981908914, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000006'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000006 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a78'), a: 921.8929672784257, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 912.1781609119864 }
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000007'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000007 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a79'), a: 993.4590822176408, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000008'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 948.0165404542549 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000008 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a7a'), a: 757.1213485690921, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|121||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 948.0165404542549 } dataWritten: 210184 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:39 [conn4] moveChunk deleted: 7
m30001| Thu Jun 14 01:44:39 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000009'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30001| Thu Jun 14 01:44:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-94", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652679670), what: "moveChunk.from", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.07367152018367129 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2009, step5 of 6: 20, step6 of 6: 309 } }
m30001| Thu Jun 14 01:44:39 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: MinKey }, max: { a: 0.07367152018367129 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_MinKey", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:506848 w:295285 reslen:37 2341ms
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000009 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a7b'), a: 740.810152714928, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000000a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000000a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a7c'), a: 229.4253019260617, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000000b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000000b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a7d'), a: 240.979142480127, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000000c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000000c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a7e'), a: 201.795695527185, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000000d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000000d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a7f'), a: 849.9093823607307, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000000e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000000e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a80'), a: 623.475579622287, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000000f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000000f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a81'), a: 372.9995658486266, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000010'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000010 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a82'), a: 344.1413817678431, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000011'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000011 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a83'), a: 910.5431166864572, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000012'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000012 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a84'), a: 746.9873851056577, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000013'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000013 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a85'), a: 424.3237104468461, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000014'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000014 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a86'), a: 356.6604200837776, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000015'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000015 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a87'), a: 124.0419988502943, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000016'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000016 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a88'), a: 890.9535807397186, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000017'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000017 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a89'), a: 814.8547054914952, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000018'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000018 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a8a'), a: 771.1000253203911, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000019'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000019 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a8b'), a: 258.1392500453531, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000001a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000001a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a8c'), a: 471.3263058899433, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000001b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000001b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a8d'), a: 502.6294245116392, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000001c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000001c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a8e'), a: 1.566955821032945, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000001d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000001d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a8f'), a: 933.9048233548627, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000001e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000001e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a90'), a: 396.0725393548956, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000001f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000001f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a91'), a: 374.0819052170633, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000020'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000020 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a92'), a: 83.85208282721712, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000021'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000021 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a93'), a: 113.3030896994175, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000022'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000022 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a94'), a: 93.88968368670825, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000023'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000023 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a95'), a: 958.7246711122127, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 944.6593962676766 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000024'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000024 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a96'), a: 988.9060786587177, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000025'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000025 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a97'), a: 699.2697902238576, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000026'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000026 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a98'), a: 406.7603400779275, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000027'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000027 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a99'), a: 275.1635826577361, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000028'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000028 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a9a'), a: 939.6956743898675, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000029'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000029 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a9b'), a: 145.5555654318858, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000002a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000002a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a9c'), a: 510.4393015705518, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000002b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000002b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a9d'), a: 116.337769117888, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000002c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000002c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a9e'), a: 263.2098745902042, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000002d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000002d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34a9f'), a: 240.1848528078046, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000002e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000002e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa0'), a: 520.1297245306855, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000002f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000002f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa1'), a: 364.1031907801835, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000030'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000030 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa2'), a: 443.1731350218895, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000031'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000031 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa3'), a: 774.2308829097593, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000032'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000032 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa4'), a: 463.0197565530408, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000033'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000033 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa5'), a: 986.7548009648609, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000034'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000034 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa6'), a: 204.0633820173382, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000035'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000035 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa7'), a: 636.7083790591574, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 337.6965417950217 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000036'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|117||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 337.6965417950217 } dataWritten: 210321 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000036 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa8'), a: 855.0146793415383, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 333.7709250673817 }
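[editor's note] The exchange just above (split points lookup on the shard m30001, "about to initiate autosplit ... dataWritten: 210321 splitThreshold: 1048576" on mongos, then "chunk not full enough to trigger auto-split") is the auto-split probe. Judging only from the numbers in this log, mongos keeps a rough per-chunk count of bytes written and asks the owning shard for split points once that estimate reaches about a fifth of the split threshold (210321 is roughly 1048576/5); the shard then measures the actual chunk and declines the split here. A hedged sketch of that heuristic, under the divide-by-five assumption inferred from the log (ChunkWriteTrackerSketch and noteWrite are illustrative names, not mongos internals):

    // Hedged sketch, not the actual mongos implementation: mirrors the
    // "dataWritten: 210321 splitThreshold: 1048576" probe followed by
    // "chunk not full enough to trigger auto-split".
    #include <cstdint>
    #include <iostream>

    struct ChunkWriteTrackerSketch {
        uint64_t bytesWritten;           // rough per-chunk estimate, reset after each probe
        uint64_t splitThreshold;         // 1048576 bytes in the log above

        // Called after every routed write with the size of the written document.
        // Returns true when mongos should ask the owning shard for split points.
        bool noteWrite(uint64_t docBytes) {
            bytesWritten += docBytes;
            // Assumption inferred from the log: probe at ~1/5 of the threshold
            // (210321 bytes written against a 1048576-byte threshold).
            if (bytesWritten < splitThreshold / 5) return false;
            bytesWritten = 0;            // re-arm the estimate either way
            return true;                 // caller requests a split points lookup
        }
    };

    int main() {
        ChunkWriteTrackerSketch chunk = {0, 1048576};
        int lookups = 0;
        for (int i = 0; i < 2000; ++i)   // replay ~1094-byte inserts like the ones above
            if (chunk.noteWrite(1094)) ++lookups;
        std::cout << "split point lookups requested: " << lookups << std::endl;  // prints: 10
        return 0;
    }

With ~1094-byte documents like the ones being written back in this log, the probe fires only about once every couple of hundred inserts, which is consistent with the "request split points lookup" lines being far rarer than the insert lines around them.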
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000037'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000037 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aa9'), a: 217.8087738069777, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000038'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000038 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aaa'), a: 653.3199803606138, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000039'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000039 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aab'), a: 986.7377768734527, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000003a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000003a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aac'), a: 205.9140299460607, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000003b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000003b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aad'), a: 40.34301600728462, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000003c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000003c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aae'), a: 736.1673136088092, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000003d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000003d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aaf'), a: 545.7361067212142, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000003e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000003e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab0'), a: 395.4467625095348, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000003f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000003f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab1'), a: 508.4203839466858, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000040'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000040 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab2'), a: 516.2678197700943, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000041'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000041 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab3'), a: 725.2410187890212, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000042'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000042 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab4'), a: 11.99216803909309, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000043'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000043 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab5'), a: 625.5962463650177, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000044'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000044 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab6'), a: 85.45765767859392, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000045'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000045 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab7'), a: 260.0764324903251, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000046'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000046 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab8'), a: 426.4776123754754, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000047'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000047 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ab9'), a: 470.2376297125488, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000048'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000048 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aba'), a: 227.107428476358, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000049'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000049 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34abb'), a: 251.9195587078434, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000004a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000004a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34abc'), a: 585.8591180248293, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000004b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000004b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34abd'), a: 8.358734570687144, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn4] request split points lookup for chunk test.foo { : 784.2714953599016 } -->> { : 797.6352444405507 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|114||000000000000000000000000 min: { a: 784.2714953599016 } max: { a: 797.6352444405507 } dataWritten: 210327 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000004c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000004c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34abe'), a: 310.1888866783327, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 791.539521607368 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000004d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000004d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34abf'), a: 44.93055831994796, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000004e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000004e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac0'), a: 253.7537679533145, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000004f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000004f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac1'), a: 566.9035707305293, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000050'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000050 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac2'), a: 274.9184623223092, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000051'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000051 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac3'), a: 152.9578338155416, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000052'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000052 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac4'), a: 700.2082446699792, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000053'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000053 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac5'), a: 253.8830905905938, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000054'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000054 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac6'), a: 985.8865951519524, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000055'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000055 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac7'), a: 536.9336492313628, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000056'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000056 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac8'), a: 648.2504099377851, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000057'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000057 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ac9'), a: 706.5011180581538, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000058'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000058 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aca'), a: 674.8717660131171, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000059'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000059 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34acb'), a: 891.4258560052095, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000005a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000005a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34acc'), a: 621.7866546220729, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000005b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000005b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34acd'), a: 424.3792767110085, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000005c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000005c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ace'), a: 673.0762276911699, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000005d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000005d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34acf'), a: 240.913399139727, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000005e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000005e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad0'), a: 707.5698456678825, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000005f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000005f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad1'), a: 602.2184752898946, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000060'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000060 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad2'), a: 141.3108403958169, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000061'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000061 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad3'), a: 69.20506544214588, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000062'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000062 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad4'), a: 143.378593078793, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000063'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000063 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad5'), a: 814.7104049196108, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000064'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000064 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad6'), a: 46.69374159193207, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000065'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000065 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad7'), a: 409.521868487644, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000066'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000066 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad8'), a: 75.21129520803571, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000067'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000067 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ad9'), a: 126.2394441943927, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000068'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000068 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ada'), a: 205.7184717980205, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000069'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000069 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34adb'), a: 808.1155997080704, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000006a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000006a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34adc'), a: 288.2130918646992, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000006b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000006b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34add'), a: 198.8917907882042, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000006c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000006c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ade'), a: 816.0847231893933, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000006d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000006d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34adf'), a: 840.8973701745347, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000006e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000006e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae0'), a: 709.2180476957608, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000006f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000006f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae1'), a: 754.5152419179738, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000070'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000070 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae2'), a: 533.4748729026271, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000071'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000071 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae3'), a: 910.4015706112439, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 248.3080159156712 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:39 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 248.3080159156712 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:39 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 248.3080159156712 }, max: { a: 264.0825842924789 }, from: "shard0001", splitKeys: [ { a: 254.1395685736485 } ], shardId: "test.foo-a_248.3080159156712", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4732a28802daeee015
m30001| Thu Jun 14 01:44:39 [conn2] splitChunk accepted at version 2|1||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-95", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652679767), what: "split", ns: "test.foo", details: { before: { min: { a: 248.3080159156712 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 248.3080159156712 }, max: { a: 254.1395685736485 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 254.1395685736485 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000072'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000072 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae4'), a: 3.20221559826972, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000073'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000073 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae5'), a: 310.0495327741978, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000074'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000074 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae6'), a: 814.4981974522601, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000075'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000075 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae7'), a: 379.512260123404, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000076'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000076 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae8'), a: 641.6084035565957, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000077'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000077 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ae9'), a: 450.0738695665958, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000078'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000078 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aea'), a: 943.3135989537511, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000079'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000079 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aeb'), a: 708.7305421428571, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000007a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000007a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aec'), a: 407.4162265551523, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000007b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000007b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aed'), a: 26.494710769699, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000007c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000007c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aee'), a: 886.9812102298489, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000007d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000007d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aef'), a: 155.3630587960056, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|86||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 264.0825842924789 } dataWritten: 209742 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 98 version: 2|3||4fd97a3b0d2fef4d6a507be2 based on: 2|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|86||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 264.0825842924789 } on: { a: 254.1395685736485 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|3, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|134||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 905.2934559328332 } dataWritten: 210756 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 99 version: 2|5||4fd97a3b0d2fef4d6a507be2 based on: 2|3||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|134||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 905.2934559328332 } on: { a: 898.6566515076229 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|5, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|168||000000000000000000000000 min: { a: 721.9923962351373 } max: { a: 729.8361633348899 } dataWritten: 210083 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 728.293170967589 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|181||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 664.5574284897642 } dataWritten: 210204 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 663.7320312268614 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000007e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000007e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|5||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af0'), a: 419.9964940868513, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|5||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|5, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x89967d0
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 12.55217658236718 } dataWritten: 210436 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 6.992460158443903 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|129||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 552.1925267328988 } dataWritten: 210703 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 548.8163291852984 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } dataWritten: 210665 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 647.0218263347224 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|128||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 498.2021416153332 } dataWritten: 210514 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 100 version: 2|7||4fd97a3b0d2fef4d6a507be2 based on: 2|5||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|128||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 498.2021416153332 } on: { a: 490.1028421929578 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|7, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|174||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 657.3538695372831 } dataWritten: 209976 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 655.3330134045702 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|85||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 248.3080159156712 } dataWritten: 210249 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 101 version: 2|9||4fd97a3b0d2fef4d6a507be2 based on: 2|7||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|85||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 248.3080159156712 } on: { a: 240.0709323500288 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|9, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000007f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000007f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|9||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af1'), a: 582.5233141222343, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|9||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|9, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x89967d0
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|171||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 344.8762285660836 } dataWritten: 210561 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 343.9182092253141 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 201505 splitThreshold: 943718
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|83||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 309.3101713472285 } dataWritten: 210285 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 102 version: 2|11||4fd97a3b0d2fef4d6a507be2 based on: 2|9||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|83||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 309.3101713472285 } on: { a: 300.0603324337813 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|11, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|159||000000000000000000000000 min: { a: 948.0165404542549 } max: { a: 955.9182567868356 } dataWritten: 210449 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000080'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000080 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af2'), a: 883.3815068143728, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|11, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x89967d0
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:39 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 891.8750702869381 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:44:39 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 891.8750702869381 }, max: { a: 905.2934559328332 }, from: "shard0001", splitKeys: [ { a: 898.6566515076229 } ], shardId: "test.foo-a_891.8750702869381", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4732a28802daeee016
m30001| Thu Jun 14 01:44:39 [conn2] splitChunk accepted at version 2|3||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-96", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652679795), what: "split", ns: "test.foo", details: { before: { min: { a: 891.8750702869381 }, max: { a: 905.2934559328332 }, lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 891.8750702869381 }, max: { a: 898.6566515076229 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 898.6566515076229 }, max: { a: 905.2934559328332 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 721.9923962351373 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:44:39 [conn4] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 664.5574284897642 }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 12.55217658236718 }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 483.6281235892167 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:39 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 483.6281235892167 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:39 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 483.6281235892167 }, max: { a: 498.2021416153332 }, from: "shard0001", splitKeys: [ { a: 490.1028421929578 } ], shardId: "test.foo-a_483.6281235892167", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4732a28802daeee017
m30001| Thu Jun 14 01:44:39 [conn2] splitChunk accepted at version 2|5||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-97", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652679858), what: "split", ns: "test.foo", details: { before: { min: { a: 483.6281235892167 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 490.1028421929578 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 648.6747268265868 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:39 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 233.8565055904641 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:39 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, from: "shard0001", splitKeys: [ { a: 240.0709323500288 } ], shardId: "test.foo-a_233.8565055904641", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4732a28802daeee018
m30001| Thu Jun 14 01:44:39 [conn2] splitChunk accepted at version 2|7||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-98", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652679888), what: "split", ns: "test.foo", details: { before: { min: { a: 233.8565055904641 }, max: { a: 248.3080159156712 }, lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 233.8565055904641 }, max: { a: 240.0709323500288 }, lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 240.0709323500288 }, max: { a: 248.3080159156712 }, lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 337.6965417950217 } -->> { : 344.8762285660836 }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:44:39 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 294.0222214358918 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:44:39 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 294.0222214358918 }, max: { a: 309.3101713472285 }, from: "shard0001", splitKeys: [ { a: 300.0603324337813 } ], shardId: "test.foo-a_294.0222214358918", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4732a28802daeee019
m30001| Thu Jun 14 01:44:39 [conn2] splitChunk accepted at version 2|9||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-99", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652679915), what: "split", ns: "test.foo", details: { before: { min: { a: 294.0222214358918 }, max: { a: 309.3101713472285 }, lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 300.0603324337813 }, max: { a: 309.3101713472285 }, lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:39 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:39 [conn2] request split points lookup for chunk test.foo { : 948.0165404542549 } -->> { : 955.9182567868356 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000081'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000081 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af3'), a: 765.8633173783686, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000082'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000082 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af4'), a: 412.2016200297781, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000083'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000083 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af5'), a: 986.5696292779018, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000084'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000084 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af6'), a: 455.5806387995198, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000085'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000085 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af7'), a: 846.6590356694769, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000086'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000086 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af8'), a: 172.8092077404637, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000087'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000087 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34af9'), a: 731.404854473839, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000088'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000088 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34afa'), a: 367.8088344064113, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000089'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000089 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34afb'), a: 834.1504438709943, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] chunk not full enough to trigger auto-split { a: 953.9936196507325 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000008a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000008a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34afc'), a: 779.3486413738742, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000008b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000008b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34afd'), a: 604.9223406066442, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000008c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000008c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34afe'), a: 890.7582877560965, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000008d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000008d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34aff'), a: 501.1656904360292, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000008e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000008e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b00'), a: 489.5057691058993, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000008f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000008f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b01'), a: 471.0527162659641, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000090'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000090 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b02'), a: 627.695943725211, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000091'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000091 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b03'), a: 735.0116421935767, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000092'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000092 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b04'), a: 933.7142439646083, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000093'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000093 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b05'), a: 712.8654636768897, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000094'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000094 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b06'), a: 67.92561413222219, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000095'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000095 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b07'), a: 701.3591660812175, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000096'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000096 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b08'), a: 557.2058497321078, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000097'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000097 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b09'), a: 484.6163537957908, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000098'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000098 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b0a'), a: 980.4227081015898, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000099'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000099 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b0b'), a: 68.55435110919905, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000009a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000009a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b0c'), a: 96.70709183032189, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000009b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000009b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b0d'), a: 295.4758989144528, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000009c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000009c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b0e'), a: 738.0044740673893, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000009d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000009d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b0f'), a: 108.5480052506334, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000009e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000009e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b10'), a: 245.1326152284469, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000009f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000009f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b11'), a: 604.9014919029331, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a0'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a0 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b12'), a: 671.1869900458937, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a1'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a1 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b13'), a: 913.4877504889663, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a2'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a2 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b14'), a: 557.0402756139416, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|132||000000000000000000000000 min: { a: 738.6198156338151 } max: { a: 752.6019558395919 } dataWritten: 210755 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a3'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a3 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b15'), a: 48.09141756981994, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a4'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a4 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b16'), a: 590.9198549349835, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a5'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a5 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b17'), a: 671.1184564238968, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a6'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a6 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b18'), a: 78.27707208530799, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a7'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a7 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b19'), a: 629.6864221495796, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a8'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a8 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b1a'), a: 690.6782425012722, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000a9'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000a9 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b1b'), a: 927.4883138962837, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000aa'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000aa needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b1c'), a: 145.1586361746666, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ab'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ab needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b1d'), a: 724.8605098675532, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ac'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ac needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b1e'), a: 140.9309926956586, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ad'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ad needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b1f'), a: 32.05948053547225, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ae'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ae needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b20'), a: 291.6897237595994, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000af'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000af needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b21'), a: 9.169455267369898, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b0'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b0 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b22'), a: 822.7927915993254, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b1'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b1 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b23'), a: 508.1734829240568, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b2'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b2 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b24'), a: 227.7465209702169, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b3'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b3 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b25'), a: 502.001760902766, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b4'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b4 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b26'), a: 812.7071885636732, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b5'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b5 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b27'), a: 920.923895529183, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b6'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b6 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b28'), a: 406.8061006374901, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b7'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b7 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b29'), a: 753.2041061249411, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b8'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b8 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b2a'), a: 426.2966921843067, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000b9'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000b9 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b2b'), a: 924.8392395326599, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ba'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ba needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b2c'), a: 788.7224130348234, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000bb'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000bb needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b2d'), a: 828.4287587065502, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000bc'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000bc needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b2e'), a: 857.0545086876662, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000bd'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000bd needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b2f'), a: 565.3817424307219, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000be'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000be needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b30'), a: 21.21197225289395, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000bf'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000bf needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b31'), a: 155.9780994570565, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c0'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c0 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b32'), a: 704.3427365507744, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c1'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c1 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b33'), a: 581.1183100361268, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
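The repeating four-line cadence above appears to be the mongos writeback listener draining inserts the shard refused while the mongos's routing table was stale: each writebacklisten result carries the chunk version the shard wanted (logged as needVersion) and the version the mongos had used (yourVersion), and because the mongos is already at 2|11 for the same epoch it logs "wbl already reloaded config information" and simply replays the op. The snippet below is a hypothetical mongo shell helper, not server code, that mirrors that comparison using plain values copied from the log lines above.

    // Illustrative only (hypothetical helper, not the server's implementation):
    // mirrors the check logged as "needVersion : 2|0||<epoch>  mine : 2|11||<epoch>".
    // Versions are passed as plain { major, minor, epoch } objects, epoch as a string.
    function wblNeedsReload(needVersion, myVersion) {
        // A different epoch means the collection was dropped/resharded, so the
        // cached routing table is unusable and must be reloaded unconditionally.
        if (needVersion.epoch !== myVersion.epoch) return true;
        // Same epoch: reload only when the shard asked for a newer major version
        // than the one this mongos already holds (2|0 vs 2|11 here, so no reload,
        // which is why the log prints "wbl already reloaded config information").
        return needVersion.major > myVersion.major;
    }

    // Values taken from the surrounding log lines:
    printjson(wblNeedsReload({ major: 2, minor: 0,  epoch: "4fd97a3b0d2fef4d6a507be2" },
                             { major: 2, minor: 11, epoch: "4fd97a3b0d2fef4d6a507be2" }));  // false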
m30001| Thu Jun 14 01:44:39 [conn4] request split points lookup for chunk test.foo { : 738.6198156338151 } -->> { : 752.6019558395919 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c2'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c2 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b34'), a: 413.1266227736927, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c3'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c3 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b35'), a: 486.3972280809381, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c4'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c4 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b36'), a: 975.1994797668273, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c5'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c5 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b37'), a: 407.5221986663245, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c6'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c6 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b38'), a: 753.2550224409367, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c7'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c7 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b39'), a: 185.7924522342924, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c8'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c8 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b3a'), a: 465.7476194195849, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000c9'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000c9 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b3b'), a: 818.714948347768, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ca'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ca needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b3c'), a: 240.3081918164849, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000cb'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000cb needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b3d'), a: 834.8302083607635, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000cc'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000cc needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b3e'), a: 780.5363293823883, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000cd'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000cd needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b3f'), a: 660.060293575257, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ce'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ce needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b40'), a: 532.9087436508199, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000cf'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000cf needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b41'), a: 637.5969071787023, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d0'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d0 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b42'), a: 631.9356533326049, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d1'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d1 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b43'), a: 922.3725925484108, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d2'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d2 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b44'), a: 202.8175344266637, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d3'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d3 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b45'), a: 457.9777675373292, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d4'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d4 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b46'), a: 476.7437722066822, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d5'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d5 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b47'), a: 487.6046251844161, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d6'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d6 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b48'), a: 680.2952193386163, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d7'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d7 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b49'), a: 395.9379807550208, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d8'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d8 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b4a'), a: 993.4552088543904, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000d9'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000d9 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b4b'), a: 236.230677664592, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000da'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000da needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b4c'), a: 117.4107794447438, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000db'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000db needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b4d'), a: 692.0674359796924, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000dc'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000dc needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b4e'), a: 381.1776897040198, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000dd'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000dd needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b4f'), a: 848.1424151490751, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000de'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000de needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b50'), a: 463.2726177196349, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000df'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000df needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b51'), a: 405.9758032852766, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e0'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e0 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b52'), a: 969.9741769036611, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e1'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e1 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b53'), a: 551.7270104786116, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e2'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e2 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b54'), a: 444.5722116024229, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e3'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e3 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b55'), a: 752.6016212204808, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e4'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e4 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b56'), a: 172.4459056513082, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e5'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e5 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b57'), a: 279.8980338728525, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e6'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e6 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b58'), a: 75.49417755959998, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e7'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e7 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b59'), a: 200.2261262750855, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e8'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e8 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b5a'), a: 377.6607393615278, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000e9'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000e9 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b5b'), a: 177.2015943770042, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ea'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ea needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b5c'), a: 782.9610210547835, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000eb'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000eb needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b5d'), a: 434.9581348879844, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ec'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ec needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b5e'), a: 991.071397642079, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ed'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ed needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b5f'), a: 168.7213848650012, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ee'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ee needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b60'), a: 226.344278316284, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ef'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ef needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b61'), a: 78.68372675355806, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f0'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f0 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b62'), a: 457.8554994101405, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f1'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f1 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b63'), a: 349.3584665215283, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f2'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f2 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b64'), a: 415.1885886041989, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 738.6198156338151 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:39 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 744.9210849408088 } ], shardId: "test.foo-a_738.6198156338151", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:39 [conn4] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:39 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4732a28802daeee01a
m30001| Thu Jun 14 01:44:39 [conn4] splitChunk accepted at version 2|11||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:39 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:39-100", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48973", time: new Date(1339652679978), what: "split", ns: "test.foo", details: { before: { min: { a: 738.6198156338151 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 738.6198156338151 }, max: { a: 744.9210849408088 }, lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 744.9210849408088 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:39 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f3'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f3 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b65'), a: 610.9504881322061, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f4'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f4 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b66'), a: 772.1501543524299, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f5'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f5 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b67'), a: 281.1622430951881, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f6'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f6 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b68'), a: 784.15851687187, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f7'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f7 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b69'), a: 289.9832037354802, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f8'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f8 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b6a'), a: 670.9415036178759, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000f9'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000f9 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b6b'), a: 896.7417259218741, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000fa'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000fa needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b6c'), a: 471.3327166288033, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000fb'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000fb needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b6d'), a: 994.003527833074, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000fc'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000fc needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b6e'), a: 438.9097430610688, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000fd'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000fd needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b6f'), a: 598.3868991509635, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000fe'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000fe needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b70'), a: 164.1335694321969, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a4700000000000000ff'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a4700000000000000ff needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b71'), a: 811.2538361919611, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 103 version: 2|13||4fd97a3b0d2fef4d6a507be2 based on: 2|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000100'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000100 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b72'), a: 9.318229576473659, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:39 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|13, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x89967d0
m30999| Thu Jun 14 01:44:39 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|132||000000000000000000000000 min: { a: 738.6198156338151 } max: { a: 752.6019558395919 } on: { a: 744.9210849408088 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|13, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:39 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000101'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000101 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b73'), a: 315.3819198134192, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000102'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000102 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b74'), a: 526.5947575176798, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000103'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000103 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b75'), a: 895.6012711041731, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000104'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000104 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b76'), a: 836.0752814897011, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000105'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000105 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b77'), a: 736.0934796295693, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000106'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000106 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b78'), a: 402.3332318419428, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000107'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000107 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b79'), a: 574.324445484137, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000108'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000108 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b7a'), a: 833.3857358301105, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000109'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000109 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b7b'), a: 666.7691084802626, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000010a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000010a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b7c'), a: 699.5061166808503, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000010b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000010b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b7d'), a: 945.6629648656165, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000010c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000010c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b7e'), a: 531.948513377498, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000010d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000010d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b7f'), a: 485.5817814792257, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000010e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000010e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b80'), a: 107.1969905866053, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000010f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000010f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b81'), a: 235.9346112711625, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000110'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000110 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b82'), a: 484.4455619918878, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000111'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000111 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b83'), a: 958.3087982786095, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000112'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000112 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b84'), a: 567.0904172123488, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000113'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000113 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b85'), a: 999.5035947000611, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000114'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000114 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b86'), a: 868.1067489111169, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000115'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000115 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b87'), a: 261.1343707011549, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000116'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000116 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b88'), a: 729.957197355367, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000117'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000117 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b89'), a: 839.6084148150777, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000118'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000118 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b8a'), a: 178.4240020930114, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000119'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000119 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b8b'), a: 916.4171002536395, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000011a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000011a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b8c'), a: 463.387333887885, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000011b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000011b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b8d'), a: 704.0349815242002, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000011c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000011c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b8e'), a: 885.6274364792313, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000011d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000011d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b8f'), a: 488.8648650838685, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000011e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000011e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b90'), a: 845.3712664909325, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000011f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000011f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b91'), a: 966.8472300571345, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000120'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000120 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b92'), a: 934.3247786443949, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000121'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000121 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b93'), a: 312.4113115555117, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000122'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000122 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b94'), a: 977.7445996422142, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000123'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000123 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b95'), a: 445.8640553681862, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000124'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000124 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b96'), a: 6.745164125947944, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000125'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000125 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b97'), a: 222.1898161048511, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000126'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000126 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b98'), a: 384.1749756510033, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000127'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000127 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b99'), a: 225.2974607166868, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000128'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000128 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b9a'), a: 541.3166734007444, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000129'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000129 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b9b'), a: 251.3520609808011, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000012a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000012a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b9c'), a: 719.2842890652718, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000012b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000012b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b9d'), a: 653.9442734319567, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000012c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000012c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b9e'), a: 600.8467305121994, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000012d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000012d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34b9f'), a: 519.080281034518, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000012e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000012e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba0'), a: 8.663296458059522, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000012f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000012f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba1'), a: 171.5495277203948, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000130'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000130 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba2'), a: 818.3628517342058, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000131'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000131 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba3'), a: 83.99343191185882, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000132'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000132 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba4'), a: 663.9496510941884, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000133'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000133 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba5'), a: 679.4364215336383, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000134'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000134 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba6'), a: 813.7111061112272, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000135'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000135 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba7'), a: 366.2494780258409, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000136'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000136 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba8'), a: 72.80481311037323, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000137'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000137 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34ba9'), a: 16.22471301800232, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000138'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000138 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34baa'), a: 670.2978016860859, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000139'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000139 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bab'), a: 851.9972263321309, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000013a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000013a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bac'), a: 650.5387817047582, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000013b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000013b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bad'), a: 725.4352653078963, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000013c'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000013c needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bae'), a: 288.9000213047946, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000013d'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000013d needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34baf'), a: 185.4328014890324, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000013e'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000013e needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb0'), a: 858.2962227861453, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000013f'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000013f needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb1'), a: 709.0056947058592, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000140'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000140 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb2'), a: 694.2583936357311, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000141'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000141 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb3'), a: 426.5676714312199, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000142'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000142 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb4'), a: 842.131356625412, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000143'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000143 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb5'), a: 549.2181338072103, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000144'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000144 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb6'), a: 660.206009556826, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000145'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000145 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb7'), a: 206.9162105550727, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000146'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000146 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb8'), a: 27.96287895614524, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000147'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000147 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bb9'), a: 251.2671336452902, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000148'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000148 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bba'), a: 340.3751232161024, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a470000000000000149'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a470000000000000149 needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bbb'), a: 750.6627697306262, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000014a'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000014a needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bbc'), a: 219.3147372290111, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", id: ObjectId('4fd97a47000000000000014b'), connectionId: 3, instanceIdent: "domU-12-31-39-01-70-B4:30001", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), yourVersion: Timestamp 1000|0, yourVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), msg: BinData }, ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] connectionId: domU-12-31-39-01-70-B4:30001:3 writebackId: 4fd97a47000000000000014b needVersion : 2|0||4fd97a3b0d2fef4d6a507be2 mine : 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] op: insert len: 1094 ns: test.foo{ _id: ObjectId('4fd97a4705a35677eff34bbd'), a: 909.6716857971105, y: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." }
m30999| Thu Jun 14 01:44:40 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 2|0||4fd97a3b0d2fef4d6a507be2, at version 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|145||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 66.37486853611429 } dataWritten: 210464 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 64.1006234117244 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|69||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 427.2300955074828 } dataWritten: 210426 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 66.37486853611429 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 411.0287894698923 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 411.0287894698923 }, max: { a: 427.2300955074828 }, from: "shard0001", splitKeys: [ { a: 417.3437896431063 } ], shardId: "test.foo-a_411.0287894698923", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee01b
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|13||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-101", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680074), what: "split", ns: "test.foo", details: { before: { min: { a: 411.0287894698923 }, max: { a: 427.2300955074828 }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 411.0287894698923 }, max: { a: 417.3437896431063 }, lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 417.3437896431063 }, max: { a: 427.2300955074828 }, lastmod: Timestamp 2000|15, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 104 version: 2|15||4fd97a3b0d2fef4d6a507be2 based on: 2|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|69||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 427.2300955074828 } on: { a: 417.3437896431063 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|15, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 206076 splitThreshold: 943718
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|181||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 664.5574284897642 } dataWritten: 209767 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 664.5574284897642 }
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 663.5503081833705 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|167||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 721.9923962351373 } dataWritten: 210545 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 720.0040037855679 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|129||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 552.1925267328988 } dataWritten: 209750 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 548.5240984034862 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 264.0825842924789 } dataWritten: 210183 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 260.6018188862606 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 721.9923962351373 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 254.1395685736485 } -->> { : 264.0825842924789 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|98||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 456.4586339452165 } dataWritten: 210002 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 441.0435238853461 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 441.0435238853461 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 447.8806134954977 } ], shardId: "test.foo-a_441.0435238853461", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee01c
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|15||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-102", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680252), what: "split", ns: "test.foo", details: { before: { min: { a: 441.0435238853461 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 441.0435238853461 }, max: { a: 447.8806134954977 }, lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 447.8806134954977 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 105 version: 2|17||4fd97a3b0d2fef4d6a507be2 based on: 2|15||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|98||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 456.4586339452165 } on: { a: 447.8806134954977 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|17, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|13||000000000000000000000000 min: { a: 744.9210849408088 } max: { a: 752.6019558395919 } dataWritten: 210062 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 744.9210849408088 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:44:40 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.4, filling with zeroes...
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 750.9059398498323 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|97||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 441.0435238853461 } dataWritten: 210578 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 427.2300955074828 } -->> { : 441.0435238853461 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 427.2300955074828 } -->> { : 441.0435238853461 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 427.2300955074828 }, max: { a: 441.0435238853461 }, from: "shard0001", splitKeys: [ { a: 433.3806610330477 } ], shardId: "test.foo-a_427.2300955074828", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee01d
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|17||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-103", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680293), what: "split", ns: "test.foo", details: { before: { min: { a: 427.2300955074828 }, max: { a: 441.0435238853461 }, lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 427.2300955074828 }, max: { a: 433.3806610330477 }, lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 433.3806610330477 }, max: { a: 441.0435238853461 }, lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 106 version: 2|19||4fd97a3b0d2fef4d6a507be2 based on: 2|17||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|97||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 441.0435238853461 } on: { a: 433.3806610330477 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|19, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|107||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 868.5788679342879 } dataWritten: 210734 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 855.8703567421647 } -->> { : 868.5788679342879 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 855.8703567421647 } -->> { : 868.5788679342879 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 855.8703567421647 }, max: { a: 868.5788679342879 }, from: "shard0001", splitKeys: [ { a: 861.9626177544285 } ], shardId: "test.foo-a_855.8703567421647", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee01e
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|19||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-104", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680303), what: "split", ns: "test.foo", details: { before: { min: { a: 855.8703567421647 }, max: { a: 868.5788679342879 }, lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 861.9626177544285 }, max: { a: 868.5788679342879 }, lastmod: Timestamp 2000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 101.960589257945 } -->> { : 111.0431509615952 }
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 107 version: 2|21||4fd97a3b0d2fef4d6a507be2 based on: 2|19||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|107||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 868.5788679342879 } on: { a: 861.9626177544285 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|21, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|135||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 111.0431509615952 } dataWritten: 210627 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 108.1236206750289 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|154||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 998.3975234740553 } dataWritten: 209771 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 985.6773819217475 } -->> { : 998.3975234740553 }
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 992.1699783025936 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|17||000000000000000000000000 min: { a: 447.8806134954977 } max: { a: 456.4586339452165 } dataWritten: 210393 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 454.1581572601996 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 447.8806134954977 } -->> { : 456.4586339452165 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|123||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 146.6503611644078 } dataWritten: 209821 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 146.6503611644078 }
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 142.5879713160653 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|106||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 204.0577089538382 } dataWritten: 210131 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 204.0577089538382 }
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 108 version: 2|23||4fd97a3b0d2fef4d6a507be2 based on: 2|21||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|106||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 204.0577089538382 } on: { a: 194.8927257678023 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|23, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|148||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 57.56464668319472 } dataWritten: 210175 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 54.28697483444711 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|84||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 327.5292321238884 } dataWritten: 209874 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 109 version: 2|25||4fd97a3b0d2fef4d6a507be2 based on: 2|23||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|84||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 327.5292321238884 } on: { a: 315.9151551096841 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|25, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|119||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 526.919018850918 } dataWritten: 210557 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 522.0165492741297 }
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
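Each accepted splitChunk above rewrites the chunk ranges in the config database and appends a "split" document to the changelog (the "about to log metadata event" lines). A sketch for inspecting that metadata through the same mongos, assuming the shell is connected to localhost:30999; field names match the changelog documents shown in this log:

    var cfg = db.getSiblingDB("config");
    // current chunk ranges for the sharded collection, ordered by shard key
    cfg.chunks.find({ ns: "test.foo" }).sort({ min: 1 });
    // the split events the shards logged, matching the metadata events above
    cfg.changelog.find({ what: "split", ns: "test.foo" }).sort({ time: 1 });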
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 189606 splitThreshold: 943718
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|130||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 563.897889911273 } dataWritten: 210395 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 110 version: 2|27||4fd97a3b0d2fef4d6a507be2 based on: 2|25||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|130||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 563.897889911273 } on: { a: 558.0115575910545 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|27, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|99||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 216.8904302452864 } dataWritten: 210499 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 111 version: 2|29||4fd97a3b0d2fef4d6a507be2 based on: 2|27||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|99||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 216.8904302452864 } on: { a: 209.8684815227433 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|29, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|157||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 580.4600029065366 } dataWritten: 210474 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 578.0618233691216 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|133||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 891.8750702869381 } dataWritten: 210365 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 888.2139575450998 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|20||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 861.9626177544285 } dataWritten: 210671 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 861.6096871705487 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|153||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 985.6773819217475 } dataWritten: 210429 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 983.1005315016527 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 12.55217658236718 } dataWritten: 209876 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 6.588314688079078 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|150||000000000000000000000000 min: { a: 703.7520953686671 } max: { a: 714.0536251380356 } dataWritten: 210221 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 709.7889422318788 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|10||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 300.0603324337813 } dataWritten: 209837 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 299.6349472494003 }
m30000| Thu Jun 14 01:44:40 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { a: 490.1028421929578 } max: { a: 498.2021416153332 } dataWritten: 210767 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 496.0008165086051 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|165||000000000000000000000000 min: { a: 66.37486853611429 } max: { a: 74.43717892117874 } dataWritten: 210731 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 72.50435565385116 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|112||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 840.7121644073931 } dataWritten: 210035 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 112 version: 2|31||4fd97a3b0d2fef4d6a507be2 based on: 2|29||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:40 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|112||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 840.7121644073931 } on: { a: 833.5963963333859 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|31, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:40 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:40 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|29||000000000000000000000000 min: { a: 209.8684815227433 } max: { a: 216.8904302452864 } dataWritten: 210632 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:40 [conn] chunk not full enough to trigger auto-split { a: 215.7255173071945 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|109||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 977.1164746659301 } dataWritten: 210045 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 970.7415486535277 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|105||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 188.6698238706465 } dataWritten: 210391 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 113 version: 2|33||4fd97a3b0d2fef4d6a507be2 based on: 2|31||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|105||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 188.6698238706465 } on: { a: 181.7281932506388 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|33, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|31||000000000000000000000000 min: { a: 833.5963963333859 } max: { a: 840.7121644073931 } dataWritten: 209774 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 839.7168084923148 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|101||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 918.4259760765641 } dataWritten: 210303 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 114 version: 2|35||4fd97a3b0d2fef4d6a507be2 based on: 2|33||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|101||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 918.4259760765641 } on: { a: 910.9608546053483 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|35, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } dataWritten: 210178 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 209.6031585403729 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|9||000000000000000000000000 min: { a: 240.0709323500288 } max: { a: 248.3080159156712 } dataWritten: 209773 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 245.7986581905222 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } dataWritten: 210498 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 646.0797781285229 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 264.0825842924789 } dataWritten: 210719 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 260.0764324903251 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|91||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 392.8718206829087 } dataWritten: 210697 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 115 version: 2|37||4fd97a3b0d2fef4d6a507be2 based on: 2|35||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|91||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 392.8718206829087 } on: { a: 383.7239757530736 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|37, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|155||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 506.5947777056855 } dataWritten: 209864 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 503.745324530391 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 199544 splitThreshold: 943718
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|183||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 463.2766201180535 } dataWritten: 210212 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 461.969554780359 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|29||000000000000000000000000 min: { a: 209.8684815227433 } max: { a: 216.8904302452864 } dataWritten: 209916 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 215.5097674803825 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|6||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 490.1028421929578 } dataWritten: 210414 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 489.3768374881246 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|126||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 378.3565272980204 } dataWritten: 210707 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 116 version: 2|39||4fd97a3b0d2fef4d6a507be2 based on: 2|37||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|126||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 378.3565272980204 } on: { a: 369.0981926515277 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|39, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|178||000000000000000000000000 min: { a: 284.9747465988205 } max: { a: 294.0222214358918 } dataWritten: 210299 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 290.9340145140106 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|153||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 985.6773819217475 } dataWritten: 209759 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 982.8009982747061 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|103||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 277.1560315461681 } dataWritten: 209841 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 117 version: 2|41||4fd97a3b0d2fef4d6a507be2 based on: 2|39||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|103||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 277.1560315461681 } on: { a: 269.785248844529 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|41, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|39||000000000000000000000000 min: { a: 369.0981926515277 } max: { a: 378.3565272980204 } dataWritten: 210198 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 374.8422996395951 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|20||000000000000000000000000 min: { a: 855.8703567421647 } max: { a: 861.9626177544285 } dataWritten: 210779 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 861.3146546434097 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|15||000000000000000000000000 min: { a: 417.3437896431063 } max: { a: 427.2300955074828 } dataWritten: 210297 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 422.9683260468419 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|114||000000000000000000000000 min: { a: 784.2714953599016 } max: { a: 797.6352444405507 } dataWritten: 209853 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 118 version: 2|43||4fd97a3b0d2fef4d6a507be2 based on: 2|41||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|114||000000000000000000000000 min: { a: 784.2714953599016 } max: { a: 797.6352444405507 } on: { a: 790.298943411581 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|43, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|26||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 558.0115575910545 } dataWritten: 210332 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 557.5400485686073 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|156||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 515.6449770586091 } dataWritten: 210296 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 511.8007417068493 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|138||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 938.1160661714987 } dataWritten: 210595 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 933.4472204645439 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|143||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 761.349721153896 } dataWritten: 209975 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 757.8854232016437 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } dataWritten: 209763 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 645.823654573342 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|109||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 977.1164746659301 } dataWritten: 210659 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 119 version: 2|45||4fd97a3b0d2fef4d6a507be2 based on: 2|43||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|109||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 977.1164746659301 } on: { a: 970.39026226179 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|45, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|93||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 39.89992532263464 } dataWritten: 210493 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 120 version: 2|47||4fd97a3b0d2fef4d6a507be2 based on: 2|45||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|93||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 39.89992532263464 } on: { a: 30.85678137192671 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|47, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|8||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 240.0709323500288 } dataWritten: 210686 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 239.0410672479776 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|143||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 761.349721153896 } dataWritten: 210065 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 757.8316005738793 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 210679 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 833.1216414081257 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 188.6698238706465 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 188.6698238706465 }, max: { a: 204.0577089538382 }, from: "shard0001", splitKeys: [ { a: 194.8927257678023 } ], shardId: "test.foo-a_188.6698238706465", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee01f
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|21||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-105", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680433), what: "split", ns: "test.foo", details: { before: { min: { a: 188.6698238706465 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 188.6698238706465 }, max: { a: 194.8927257678023 }, lastmod: Timestamp 2000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 194.8927257678023 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 2000|23, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 47.94081917961535 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 309.3101713472285 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 309.3101713472285 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 309.3101713472285 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 315.9151551096841 } ], shardId: "test.foo-a_309.3101713472285", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee020
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|23||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-106", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680530), what: "split", ns: "test.foo", details: { before: { min: { a: 309.3101713472285 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 309.3101713472285 }, max: { a: 315.9151551096841 }, lastmod: Timestamp 2000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 315.9151551096841 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 2000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 515.6449770586091 } -->> { : 526.919018850918 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 552.1925267328988 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 552.1925267328988 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 552.1925267328988 }, max: { a: 563.897889911273 }, from: "shard0001", splitKeys: [ { a: 558.0115575910545 } ], shardId: "test.foo-a_552.1925267328988", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee021
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|25||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-107", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680648), what: "split", ns: "test.foo", details: { before: { min: { a: 552.1925267328988 }, max: { a: 563.897889911273 }, lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 552.1925267328988 }, max: { a: 558.0115575910545 }, lastmod: Timestamp 2000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 558.0115575910545 }, max: { a: 563.897889911273 }, lastmod: Timestamp 2000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 204.0577089538382 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 204.0577089538382 }, max: { a: 216.8904302452864 }, from: "shard0001", splitKeys: [ { a: 209.8684815227433 } ], shardId: "test.foo-a_204.0577089538382", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee022
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|27||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-108", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680666), what: "split", ns: "test.foo", details: { before: { min: { a: 204.0577089538382 }, max: { a: 216.8904302452864 }, lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 204.0577089538382 }, max: { a: 209.8684815227433 }, lastmod: Timestamp 2000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 209.8684815227433 }, max: { a: 216.8904302452864 }, lastmod: Timestamp 2000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 571.914212129846 } -->> { : 580.4600029065366 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 891.8750702869381 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 855.8703567421647 } -->> { : 861.9626177544285 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 12.55217658236718 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 703.7520953686671 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 300.0603324337813 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 490.1028421929578 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 66.37486853611429 } -->> { : 74.43717892117874 }
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:40 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 827.5642418995561 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:40 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 827.5642418995561 }, max: { a: 840.7121644073931 }, from: "shard0001", splitKeys: [ { a: 833.5963963333859 } ], shardId: "test.foo-a_827.5642418995561", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:40 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4832a28802daeee023
m30001| Thu Jun 14 01:44:40 [conn2] splitChunk accepted at version 2|29||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:40 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:40-109", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652680935), what: "split", ns: "test.foo", details: { before: { min: { a: 827.5642418995561 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 833.5963963333859 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 2000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:40 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:40 [conn2] request split points lookup for chunk test.foo { : 209.8684815227433 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 188.6698238706465 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 176.0230312595962 } -->> { : 188.6698238706465 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 176.0230312595962 }, max: { a: 188.6698238706465 }, from: "shard0001", splitKeys: [ { a: 181.7281932506388 } ], shardId: "test.foo-a_176.0230312595962", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee024
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|31||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-110", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681028), what: "split", ns: "test.foo", details: { before: { min: { a: 176.0230312595962 }, max: { a: 188.6698238706465 }, lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 176.0230312595962 }, max: { a: 181.7281932506388 }, lastmod: Timestamp 2000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 181.7281932506388 }, max: { a: 188.6698238706465 }, lastmod: Timestamp 2000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 833.5963963333859 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 905.2934559328332 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 905.2934559328332 }, max: { a: 918.4259760765641 }, from: "shard0001", splitKeys: [ { a: 910.9608546053483 } ], shardId: "test.foo-a_905.2934559328332", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee025
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|33||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-111", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681174), what: "split", ns: "test.foo", details: { before: { min: { a: 905.2934559328332 }, max: { a: 918.4259760765641 }, lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 905.2934559328332 }, max: { a: 910.9608546053483 }, lastmod: Timestamp 2000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 910.9608546053483 }, max: { a: 918.4259760765641 }, lastmod: Timestamp 2000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 240.0709323500288 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 254.1395685736485 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 378.3565272980204 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 378.3565272980204 }, max: { a: 392.8718206829087 }, from: "shard0001", splitKeys: [ { a: 383.7239757530736 } ], shardId: "test.foo-a_378.3565272980204", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee026
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|35||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-112", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681261), what: "split", ns: "test.foo", details: { before: { min: { a: 378.3565272980204 }, max: { a: 392.8718206829087 }, lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 378.3565272980204 }, max: { a: 383.7239757530736 }, lastmod: Timestamp 2000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 383.7239757530736 }, max: { a: 392.8718206829087 }, lastmod: Timestamp 2000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 463.2766201180535 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 209.8684815227433 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 483.6281235892167 } -->> { : 490.1028421929578 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 363.6779080113047 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 363.6779080113047 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 363.6779080113047 }, max: { a: 378.3565272980204 }, from: "shard0001", splitKeys: [ { a: 369.0981926515277 } ], shardId: "test.foo-a_363.6779080113047", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee027
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|37||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-113", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681347), what: "split", ns: "test.foo", details: { before: { min: { a: 363.6779080113047 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 369.0981926515277 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 2000|39, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 284.9747465988205 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 277.1560315461681 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 264.0825842924789 } -->> { : 277.1560315461681 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 264.0825842924789 }, max: { a: 277.1560315461681 }, from: "shard0001", splitKeys: [ { a: 269.785248844529 } ], shardId: "test.foo-a_264.0825842924789", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee028
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|39||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-114", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681386), what: "split", ns: "test.foo", details: { before: { min: { a: 264.0825842924789 }, max: { a: 277.1560315461681 }, lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 369.0981926515277 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 855.8703567421647 } -->> { : 861.9626177544285 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 417.3437896431063 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 784.2714953599016 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 784.2714953599016 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 784.2714953599016 }, max: { a: 797.6352444405507 }, from: "shard0001", splitKeys: [ { a: 790.298943411581 } ], shardId: "test.foo-a_784.2714953599016", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee029
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|41||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-115", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681504), what: "split", ns: "test.foo", details: { before: { min: { a: 784.2714953599016 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 784.2714953599016 }, max: { a: 790.298943411581 }, lastmod: Timestamp 2000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 790.298943411581 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 2000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 552.1925267328988 } -->> { : 558.0115575910545 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 964.9150523226922 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 964.9150523226922 }, max: { a: 977.1164746659301 }, from: "shard0001", splitKeys: [ { a: 970.39026226179 } ], shardId: "test.foo-a_964.9150523226922", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee02a
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|43||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-116", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681667), what: "split", ns: "test.foo", details: { before: { min: { a: 964.9150523226922 }, max: { a: 977.1164746659301 }, lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 970.39026226179 }, max: { a: 977.1164746659301 }, lastmod: Timestamp 2000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 25.60273139230473 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 25.60273139230473 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 25.60273139230473 }, max: { a: 39.89992532263464 }, from: "shard0001", splitKeys: [ { a: 30.85678137192671 } ], shardId: "test.foo-a_25.60273139230473", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee02b
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|45||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-117", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681678), what: "split", ns: "test.foo", details: { before: { min: { a: 25.60273139230473 }, max: { a: 39.89992532263464 }, lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 30.85678137192671 }, max: { a: 39.89992532263464 }, lastmod: Timestamp 2000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 240.0709323500288 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 447.8806134954977 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 66.37486853611429 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 400.6101810646703 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 985.6773819217475 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 985.6773819217475 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 985.6773819217475 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 991.2502100401695 } ], shardId: "test.foo-a_985.6773819217475", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee02c
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|47||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-118", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681805), what: "split", ns: "test.foo", details: { before: { min: { a: 985.6773819217475 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 985.6773819217475 }, max: { a: 991.2502100401695 }, lastmod: Timestamp 2000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 991.2502100401695 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 2000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 315.9151551096841 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 315.9151551096841 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 315.9151551096841 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 321.3459727153073 } ], shardId: "test.foo-a_315.9151551096841", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee02d
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|49||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-119", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681828), what: "split", ns: "test.foo", details: { before: { min: { a: 315.9151551096841 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 2000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 321.3459727153073 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 2000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 790.298943411581 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 729.8361633348899 } -->> { : 738.6198156338151 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 948.0165404542549 } -->> { : 955.9182567868356 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 571.914212129846 } -->> { : 580.4600029065366 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 515.6449770586091 } -->> { : 526.919018850918 }
m30001| Thu Jun 14 01:44:41 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 515.6449770586091 } -->> { : 526.919018850918 }
m30001| Thu Jun 14 01:44:41 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 515.6449770586091 }, max: { a: 526.919018850918 }, from: "shard0001", splitKeys: [ { a: 521.3538677091974 } ], shardId: "test.foo-a_515.6449770586091", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:41 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4932a28802daeee02e
m30001| Thu Jun 14 01:44:41 [conn2] splitChunk accepted at version 2|51||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:41 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:41-120", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652681870), what: "split", ns: "test.foo", details: { before: { min: { a: 515.6449770586091 }, max: { a: 526.919018850918 }, lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 515.6449770586091 }, max: { a: 521.3538677091974 }, lastmod: Timestamp 2000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:41 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 483.6281235892167 } -->> { : 490.1028421929578 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 955.9182567868356 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 369.0981926515277 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 790.298943411581 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 648.6747268265868 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:41 [conn2] request split points lookup for chunk test.foo { : 552.1925267328988 } -->> { : 558.0115575910545 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 254.1395685736485 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 12.55217658236718 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 0.07367152018367129 } -->> { : 12.55217658236718 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.07367152018367129 }, max: { a: 12.55217658236718 }, from: "shard0001", splitKeys: [ { a: 5.826356493812579 } ], shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee02f
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|53||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-121", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682067), what: "split", ns: "test.foo", details: { before: { min: { a: 0.07367152018367129 }, max: { a: 12.55217658236718 }, lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, lastmod: Timestamp 2000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, lastmod: Timestamp 2000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 927.6813889109981 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 927.6813889109981 }, max: { a: 938.1160661714987 }, from: "shard0001", splitKeys: [ { a: 933.0462189495814 } ], shardId: "test.foo-a_927.6813889109981", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee030
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|55||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-122", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682075), what: "split", ns: "test.foo", details: { before: { min: { a: 927.6813889109981 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 248.3080159156712 } -->> { : 254.1395685736485 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 948.0165404542549 } -->> { : 955.9182567868356 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 868.5788679342879 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 868.5788679342879 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 868.5788679342879 }, max: { a: 882.331873780809 }, from: "shard0001", splitKeys: [ { a: 873.8718881199745 } ], shardId: "test.foo-a_868.5788679342879", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee031
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|57||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-123", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682131), what: "split", ns: "test.foo", details: { before: { min: { a: 868.5788679342879 }, max: { a: 882.331873780809 }, lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 873.8718881199745 }, max: { a: 882.331873780809 }, lastmod: Timestamp 2000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 300.0603324337813 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 933.0462189495814 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 815.7684070742035 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 815.7684070742035 }, max: { a: 827.5642418995561 }, from: "shard0001", splitKeys: [ { a: 821.178966084225 } ], shardId: "test.foo-a_815.7684070742035", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee032
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|59||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-124", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682195), what: "split", ns: "test.foo", details: { before: { min: { a: 815.7684070742035 }, max: { a: 827.5642418995561 }, lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 821.178966084225 }, max: { a: 827.5642418995561 }, lastmod: Timestamp 2000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 463.2766201180535 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 610.6068178358934 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 610.6068178358934 }, max: { a: 623.3985075048967 }, from: "shard0001", splitKeys: [ { a: 615.3266278873516 } ], shardId: "test.foo-a_610.6068178358934", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee033
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|61||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-125", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682211), what: "split", ns: "test.foo", details: { before: { min: { a: 610.6068178358934 }, max: { a: 623.3985075048967 }, lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 615.3266278873516 }, max: { a: 623.3985075048967 }, lastmod: Timestamp 2000|63, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 833.5963963333859 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 254.1395685736485 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 521.3538677091974 } -->> { : 526.919018850918 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 703.7520953686671 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 703.7520953686671 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 703.7520953686671 }, max: { a: 714.0536251380356 }, from: "shard0001", splitKeys: [ { a: 708.8986861220777 } ], shardId: "test.foo-a_703.7520953686671", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee034
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|63||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-126", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682257), what: "split", ns: "test.foo", details: { before: { min: { a: 703.7520953686671 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 369.0981926515277 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 92.91917824556573 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|16||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 447.8806134954977 } dataWritten: 210219 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 446.9471374437152 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|145||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 66.37486853611429 } dataWritten: 210050 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 63.23926653028988 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|170||000000000000000000000000 min: { a: 400.6101810646703 } max: { a: 411.0287894698923 } dataWritten: 209715 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 405.9666390785398 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|154||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 998.3975234740553 } dataWritten: 210521 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 121 version: 2|49||4fd97a3b0d2fef4d6a507be2 based on: 2|47||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|154||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 998.3975234740553 } on: { a: 991.2502100401695 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|49, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|25||000000000000000000000000 min: { a: 315.9151551096841 } max: { a: 327.5292321238884 } dataWritten: 210523 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 122 version: 2|51||4fd97a3b0d2fef4d6a507be2 based on: 2|49||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|25||000000000000000000000000 min: { a: 315.9151551096841 } max: { a: 327.5292321238884 } on: { a: 321.3459727153073 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|51, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|43||000000000000000000000000 min: { a: 790.298943411581 } max: { a: 797.6352444405507 } dataWritten: 210397 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 795.8996870234697 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|131||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 738.6198156338151 } dataWritten: 210516 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 734.9145263096132 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|159||000000000000000000000000 min: { a: 948.0165404542549 } max: { a: 955.9182567868356 } dataWritten: 210124 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 953.0282752709245 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|157||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 580.4600029065366 } dataWritten: 210154 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 577.3929810688113 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|119||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 526.919018850918 } dataWritten: 210058 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 123 version: 2|53||4fd97a3b0d2fef4d6a507be2 based on: 2|51||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:41 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|119||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 526.919018850918 } on: { a: 521.3538677091974 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|53, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:41 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|6||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 490.1028421929578 } dataWritten: 210030 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 489.0069122759116 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } dataWritten: 209840 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 209.1972892944536 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|160||000000000000000000000000 min: { a: 955.9182567868356 } max: { a: 964.9150523226922 } dataWritten: 210472 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 961.5514213179725 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|39||000000000000000000000000 min: { a: 369.0981926515277 } max: { a: 378.3565272980204 } dataWritten: 210730 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 374.5587648631592 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|43||000000000000000000000000 min: { a: 790.298943411581 } max: { a: 797.6352444405507 } dataWritten: 210472 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 795.7594612378664 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|174||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 657.3538695372831 } dataWritten: 209884 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 654.0408696882598 }
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 203977 splitThreshold: 943718
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|26||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 558.0115575910545 } dataWritten: 210043 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:41 [conn] chunk not full enough to trigger auto-split { a: 557.3288190445985 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 264.0825842924789 } dataWritten: 210221 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 259.4523390423955 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 12.55217658236718 } dataWritten: 210359 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 124 version: 2|55||4fd97a3b0d2fef4d6a507be2 based on: 2|53||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|1||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 12.55217658236718 } on: { a: 5.826356493812579 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|55, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|138||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 938.1160661714987 } dataWritten: 210579 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 125 version: 2|57||4fd97a3b0d2fef4d6a507be2 based on: 2|55||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|138||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 938.1160661714987 } on: { a: 933.0462189495814 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|57, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 254.1395685736485 } dataWritten: 209849 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 252.9882117985707 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|159||000000000000000000000000 min: { a: 948.0165404542549 } max: { a: 955.9182567868356 } dataWritten: 210771 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 952.8675486922867 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|108||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 882.331873780809 } dataWritten: 209951 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 126 version: 2|59||4fd97a3b0d2fef4d6a507be2 based on: 2|57||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|108||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 882.331873780809 } on: { a: 873.8718881199745 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|59, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|10||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 300.0603324337813 } dataWritten: 210682 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 298.9257438311459 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|56||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 933.0462189495814 } dataWritten: 210618 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 932.9607436223002 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|111||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 827.5642418995561 } dataWritten: 209772 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 127 version: 2|61||4fd97a3b0d2fef4d6a507be2 based on: 2|59||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|111||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 827.5642418995561 } on: { a: 821.178966084225 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|61, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|184||000000000000000000000000 min: { a: 463.2766201180535 } max: { a: 473.1445991105042 } dataWritten: 210636 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 469.0735946770002 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|89||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 623.3985075048967 } dataWritten: 210335 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 128 version: 2|63||4fd97a3b0d2fef4d6a507be2 based on: 2|61||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|89||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 623.3985075048967 } on: { a: 615.3266278873516 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|63, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|31||000000000000000000000000 min: { a: 833.5963963333859 } max: { a: 840.7121644073931 } dataWritten: 210068 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 838.9029996552817 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 264.0825842924789 } dataWritten: 210652 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 259.3869643721212 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|53||000000000000000000000000 min: { a: 521.3538677091974 } max: { a: 526.919018850918 } dataWritten: 210672 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 526.2108153813477 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|150||000000000000000000000000 min: { a: 703.7520953686671 } max: { a: 714.0536251380356 } dataWritten: 209891 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 129 version: 2|65||4fd97a3b0d2fef4d6a507be2 based on: 2|63||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|150||000000000000000000000000 min: { a: 703.7520953686671 } max: { a: 714.0536251380356 } on: { a: 708.8986861220777 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|65, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|39||000000000000000000000000 min: { a: 369.0981926515277 } max: { a: 378.3565272980204 } dataWritten: 210166 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 374.4405728441143 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|139||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 92.91917824556573 } dataWritten: 210139 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 88.79144148769214 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|143||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 761.349721153896 } dataWritten: 210583 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 757.5221190619102 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|15||000000000000000000000000 min: { a: 417.3437896431063 } max: { a: 427.2300955074828 } dataWritten: 210169 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 130 version: 2|67||4fd97a3b0d2fef4d6a507be2 based on: 2|65||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|15||000000000000000000000000 min: { a: 417.3437896431063 } max: { a: 427.2300955074828 } on: { a: 422.4151431966537 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|67, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 191855 splitThreshold: 943718
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|156||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 515.6449770586091 } dataWritten: 210685 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 511.3156070463811 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|165||000000000000000000000000 min: { a: 66.37486853611429 } max: { a: 74.43717892117874 } dataWritten: 209937 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 71.57746266610809 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|171||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 344.8762285660836 } dataWritten: 210562 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 342.4499036030165 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|49||000000000000000000000000 min: { a: 991.2502100401695 } max: { a: 998.3975234740553 } dataWritten: 210666 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 996.5234314349742 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|44||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 970.39026226179 } dataWritten: 210330 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 969.8971666142996 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|57||000000000000000000000000 min: { a: 933.0462189495814 } max: { a: 938.1160661714987 } dataWritten: 210282 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 937.8817777853236 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|175||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 848.2332478721062 } dataWritten: 210412 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 845.4391338953969 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|35||000000000000000000000000 min: { a: 910.9608546053483 } max: { a: 918.4259760765641 } dataWritten: 210122 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 916.3677051452743 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|40||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 269.785248844529 } dataWritten: 209908 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 268.9521054168194 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|125||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 363.6779080113047 } dataWritten: 210053 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 131 version: 2|69||4fd97a3b0d2fef4d6a507be2 based on: 2|67||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|125||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 363.6779080113047 } on: { a: 358.3343339611492 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|69, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|56||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 933.0462189495814 } dataWritten: 210087 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 932.703833452666 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|121||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 948.0165404542549 } dataWritten: 210044 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 132 version: 2|71||4fd97a3b0d2fef4d6a507be2 based on: 2|69||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:42 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|121||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 948.0165404542549 } on: { a: 943.2489828660326 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|71, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|37||000000000000000000000000 min: { a: 383.7239757530736 } max: { a: 392.8718206829087 } dataWritten: 209976 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 388.524910296752 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|164||000000000000000000000000 min: { a: 167.6382092456179 } max: { a: 176.0230312595962 } dataWritten: 210073 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 172.724249827696 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|51||000000000000000000000000 min: { a: 321.3459727153073 } max: { a: 327.5292321238884 } dataWritten: 210149 splitThreshold: 1048576
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 326.2231773359692 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 417.3437896431063 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 417.3437896431063 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 417.3437896431063 }, max: { a: 427.2300955074828 }, from: "shard0001", splitKeys: [ { a: 422.4151431966537 } ], shardId: "test.foo-a_417.3437896431063", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee035
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|65||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-127", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682381), what: "split", ns: "test.foo", details: { before: { min: { a: 417.3437896431063 }, max: { a: 427.2300955074828 }, lastmod: Timestamp 2000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
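The block above is one complete autosplit cycle: mongos sees dataWritten cross splitThreshold, the shard (conn2 on m30001) looks up split points, and the split is committed under the 'test.foo' distributed lock before a "split" change-log entry is written to the config server. For reference, the same split can be requested by hand through a mongos shell; a minimal sketch, assuming a shell connected to the mongos on port 30999 and reusing the split key reported in the log above:

    // pin the split point explicitly with "middle"
    db.adminCommand({ split: "test.foo", middle: { a: 422.4151431966537 } })
    // or let the server choose a split point inside the chunk that owns this key
    db.adminCommand({ split: "test.foo", find: { a: 420 } })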
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 66.37486853611429 } -->> { : 74.43717892117874 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 337.6965417950217 } -->> { : 344.8762285660836 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 991.2502100401695 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 970.39026226179 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 933.0462189495814 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 848.2332478721062 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 910.9608546053483 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 269.785248844529 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 353.2720479801309 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 353.2720479801309 }, max: { a: 363.6779080113047 }, from: "shard0001", splitKeys: [ { a: 358.3343339611492 } ], shardId: "test.foo-a_353.2720479801309", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee036
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|67||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-128", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682508), what: "split", ns: "test.foo", details: { before: { min: { a: 353.2720479801309 }, max: { a: 363.6779080113047 }, lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 353.2720479801309 }, max: { a: 358.3343339611492 }, lastmod: Timestamp 2000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 358.3343339611492 }, max: { a: 363.6779080113047 }, lastmod: Timestamp 2000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 933.0462189495814 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 948.0165404542549 }
m30001| Thu Jun 14 01:44:42 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 938.1160661714987 } -->> { : 948.0165404542549 }
m30001| Thu Jun 14 01:44:42 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 938.1160661714987 }, max: { a: 948.0165404542549 }, from: "shard0001", splitKeys: [ { a: 943.2489828660326 } ], shardId: "test.foo-a_938.1160661714987", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:42 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4a32a28802daeee037
m30001| Thu Jun 14 01:44:42 [conn2] splitChunk accepted at version 2|69||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:42 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:42-129", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652682545), what: "split", ns: "test.foo", details: { before: { min: { a: 938.1160661714987 }, max: { a: 948.0165404542549 }, lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:44:42 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 383.7239757530736 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 167.6382092456179 } -->> { : 176.0230312595962 }
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 321.3459727153073 } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:44:42 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } dataWritten: 210674 splitThreshold: 1048576
m30001| Thu Jun 14 01:44:42 [conn2] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30999| Thu Jun 14 01:44:42 [conn] chunk not full enough to trigger auto-split { a: 208.8683092615756 }
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:44:42 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
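The splitThreshold of 1048576 bytes seen throughout these autosplit lines matches the 1 MB chunk size the balancer reports below ("Refreshing MaxChunkSize: 1"), which is why splits fire so frequently under this insert load. A sketch of how that setting is normally lowered for a test, assuming a shell connected to the mongos (the value is in megabytes; the harness may instead pass an equivalent --chunkSize option when starting mongos):

    // persist a 1 MB chunk size in the cluster metadata
    db.getSiblingDB("config").settings.save({ _id: "chunksize", value: 1 })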
m30999| Thu Jun 14 01:44:44 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:44:44 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:44:44 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a4c0d2fef4d6a507be4" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a450d2fef4d6a507be3" } }
m30999| Thu Jun 14 01:44:44 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a4c0d2fef4d6a507be4
m30999| Thu Jun 14 01:44:44 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:44:44 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:44:44 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:44 [Balancer] shard0001 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:44 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:44:44 [Balancer] shard0000
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:44:44 [Balancer] shard0001
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 2000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 2000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 2000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 2000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 2000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 2000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 2000|23, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 2000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 2000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 2000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 2000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 2000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 2000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 2000|39, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 2000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 2000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 2000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 2000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 2000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 2000|63, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 2000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 2000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 2000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 2000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 2000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 2000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 2000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 2000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 2000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 2000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 2000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] ----
m30999| Thu Jun 14 01:44:44 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:44:44 [Balancer] donor : 127 chunks on shard0001
m30999| Thu Jun 14 01:44:44 [Balancer] receiver : 1 chunks on shard0000
m30999| Thu Jun 14 01:44:44 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 2000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:44 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 2|54||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 5.826356493812579 }) shard0001:localhost:30001 -> shard0000:localhost:30000
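The balancer round above reads as: dump per-shard chunk ownership (ShardToChunksMap), compare counts (127 chunks on shard0001 vs. 1 on shard0000), and schedule a single migration from the most to the least loaded shard. If balancing needs to be suspended while inspecting a run like this, the shell can flip the settings document the balancer polls; a sketch, assuming a shell connected to the mongos:

    // pause the balancer cluster-wide (upsert the settings doc if it does not exist yet)
    db.getSiblingDB("config").settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true)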
m30001| Thu Jun 14 01:44:44 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:44 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:44 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a4c32a28802daeee038
m30001| Thu Jun 14 01:44:44 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:44-130", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652684716), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:44:44 [conn2] moveChunk request accepted at version 2|71||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:44 [conn2] moveChunk number of documents: 521
m30000| Thu Jun 14 01:44:44 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 0.07367152018367129 } -> { a: 5.826356493812579 }
m30001| Thu Jun 14 01:44:45 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, shardKeyPattern: { a: 1 }, state: "steady", counts: { cloned: 521, clonedBytes: 554865, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Thu Jun 14 01:44:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 0.07367152018367129 } -> { a: 5.826356493812579 }
m30000| Thu Jun 14 01:44:45 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:45-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652685748), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 48, step4 of 5: 0, step5 of 5: 981 } }
m30999| Thu Jun 14 01:44:45 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:44:45 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 133 version: 3|1||4fd97a3b0d2fef4d6a507be2 based on: 2|71||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:45 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:44:45 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30001| Thu Jun 14 01:44:45 [conn2] moveChunk setting version to: 3|0||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:45 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, shardKeyPattern: { a: 1 }, state: "done", counts: { cloned: 521, clonedBytes: 554865, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:44:45 [conn2] moveChunk updating self version to: 3|1||4fd97a3b0d2fef4d6a507be2 through { a: 5.826356493812579 } -> { a: 12.55217658236718 } for collection 'test.foo'
m30001| Thu Jun 14 01:44:45 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:45-131", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652685752), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:44:45 [conn2] forking for cleaning up chunk data
m30001| Thu Jun 14 01:44:45 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:45 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:45-132", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652685753), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 5, step4 of 6: 1012, step5 of 6: 18, step6 of 6: 0 } }
m30001| Thu Jun 14 01:44:45 [conn2] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) R:7 W:71 r:547159 w:1311972 reslen:37 1037ms
m30001| Thu Jun 14 01:44:45 [cleanupOldData] (start) waiting to cleanup test.foo from { a: 0.07367152018367129 } -> { a: 5.826356493812579 } # cursors:1
m30001| Thu Jun 14 01:44:45 [cleanupOldData] (looping 1) waiting to cleanup test.foo from { a: 0.07367152018367129 } -> { a: 5.826356493812579 } # cursors:1
m30001| Thu Jun 14 01:44:45 [cleanupOldData] cursors: 7679936579011722313
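The m30001/m30000 lines above trace the full migration protocol for the chosen chunk: the donor accepts the moveChunk request under the collection's distributed lock, the recipient clones 521 documents (554865 bytes) and reports state "steady", the donor commits, bumps the collection version to 3|0/3|1, and forks a cleanupOldData pass that waits on open cursors before deleting the moved range. The same migration can be triggered manually; a sketch, assuming a shell connected to the mongos and using a key inside the migrated range:

    // move the chunk that owns { a: 0.5 } from its current shard to shard0000
    db.adminCommand({ moveChunk: "test.foo", find: { a: 0.5 }, to: "shard0000" })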
m30001| Thu Jun 14 01:44:46 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.4, size: 256MB, took 6.67 secs
Count is 100000
m30999| Thu Jun 14 01:44:47 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:44:47 [conn] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:44:47 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30000| Thu Jun 14 01:44:47 [conn7] CMD: drop test.tmp.mr.foo_0_inc
m30000| Thu Jun 14 01:44:47 [conn7] build index test.tmp.mr.foo_0_inc { 0: 1 }
m30000| Thu Jun 14 01:44:47 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:44:47 [conn7] CMD: drop test.tmp.mr.foo_0
m30000| Thu Jun 14 01:44:47 [conn7] build index test.tmp.mr.foo_0 { _id: 1 }
m30000| Thu Jun 14 01:44:47 [conn7] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:47 [cleanupOldData] moveChunk deleted: 521
m30000| Thu Jun 14 01:44:47 [conn7] CMD: drop test.tmp.mrs.foo_1339652687_0
m30000| Thu Jun 14 01:44:47 [conn7] CMD: drop test.tmp.mr.foo_0
m30000| Thu Jun 14 01:44:47 [conn7] CMD: drop test.tmp.mr.foo_0
m30000| Thu Jun 14 01:44:47 [conn7] CMD: drop test.tmp.mr.foo_0_inc
m30999| Thu Jun 14 01:44:47 [conn] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:44:47 [conn3] CMD: drop test.tmp.mr.foo_0_inc
m30001| Thu Jun 14 01:44:47 [conn3] build index test.tmp.mr.foo_0_inc { 0: 1 }
m30001| Thu Jun 14 01:44:47 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:47 [conn3] CMD: drop test.tmp.mr.foo_0
m30001| Thu Jun 14 01:44:47 [conn3] build index test.tmp.mr.foo_0 { _id: 1 }
m30001| Thu Jun 14 01:44:47 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:44:50 [conn3] 28500/99471 28%
m30999| Thu Jun 14 01:44:50 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:44:50 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:44:50 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a520d2fef4d6a507be5" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a4c0d2fef4d6a507be4" } }
m30999| Thu Jun 14 01:44:50 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a520d2fef4d6a507be5
m30999| Thu Jun 14 01:44:50 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:44:50 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:44:50 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:50 [Balancer] shard0001 maxSize: 0 currSize: 256 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:50 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:44:50 [Balancer] shard0000
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, shard: "shard0000" }
m30999| Thu Jun 14 01:44:50 [Balancer] shard0001
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 2000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 2000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 2000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 2000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 2000|23, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 2000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 2000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 2000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 2000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 2000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 2000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 2000|39, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 2000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 2000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 2000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 2000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 2000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 2000|63, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 2000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 2000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 2000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 2000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 2000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 2000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 2000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 2000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 2000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 2000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 2000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] ----
m30999| Thu Jun 14 01:44:50 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:44:50 [Balancer] donor : 126 chunks on shard0001
m30999| Thu Jun 14 01:44:50 [Balancer] receiver : 2 chunks on shard0000
m30999| Thu Jun 14 01:44:50 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:50 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 3|1||000000000000000000000000 min: { a: 5.826356493812579 } max: { a: 12.55217658236718 }) shard0001:localhost:30001 -> shard0000:localhost:30000
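The migration the balancer just chose can also be reproduced by hand through the mongos with the moveChunk admin command. A sketch, assuming the cluster is still laid out as in the dump above (shard0000 at localhost:30000, shard0001 at localhost:30001):

    // mongo --port 30999 test
    db.adminCommand({
        moveChunk: "test.foo",
        find: { a: 6 },        // any shard-key value inside the chosen chunk's range
        to: "shard0000"
    });

    // chunk counts per shard, matching the donor/receiver summary above
    var counts = {};
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).forEach(function (c) {
        counts[c.shard] = (counts[c.shard] || 0) + 1;
    });
    printjson(counts);         // expected here once the move commits: { shard0000: 3, shard0001: 125 }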
m30001| Thu Jun 14 01:44:50 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_5.826356493812579", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:50 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:50 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a5232a28802daeee039
m30001| Thu Jun 14 01:44:50 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:50-133", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652690806), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:44:50 [conn2] moveChunk request accepted at version 3|1||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:50 [conn2] moveChunk number of documents: 623
m30000| Thu Jun 14 01:44:50 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 5.826356493812579 } -> { a: 12.55217658236718 }
m30001| Thu Jun 14 01:44:51 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, shardKeyPattern: { a: 1 }, state: "steady", counts: { cloned: 623, clonedBytes: 663495, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:44:51 [conn2] moveChunk setting version to: 4|0||4fd97a3b0d2fef4d6a507be2
m30000| Thu Jun 14 01:44:51 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 5.826356493812579 } -> { a: 12.55217658236718 }
m30000| Thu Jun 14 01:44:51 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:51-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652691828), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 37, step4 of 5: 0, step5 of 5: 981 } }
m30001| Thu Jun 14 01:44:51 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, shardKeyPattern: { a: 1 }, state: "done", counts: { cloned: 623, clonedBytes: 663495, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:44:51 [conn2] moveChunk updating self version to: 4|1||4fd97a3b0d2fef4d6a507be2 through { a: 12.55217658236718 } -> { a: 25.60273139230473 } for collection 'test.foo'
m30001| Thu Jun 14 01:44:51 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:51-134", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652691833), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:44:51 [conn2] forking for cleaning up chunk data
m30999| Thu Jun 14 01:44:51 [Balancer] moveChunk result: { ok: 1.0 }
m30001| Thu Jun 14 01:44:51 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:51 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:51-135", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652691833), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, step1 of 6: 0, step2 of 6: 7, step3 of 6: 1, step4 of 6: 1005, step5 of 6: 18, step6 of 6: 0 } }
m30001| Thu Jun 14 01:44:51 [conn2] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_5.826356493812579", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) R:7 W:71 r:770734 w:1312091 reslen:37 1033ms
m30999| Thu Jun 14 01:44:51 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 134 version: 4|1||4fd97a3b0d2fef4d6a507be2 based on: 3|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:44:51 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:44:51 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
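The lock handoffs and the moveChunk.start/commit/from/to events logged during this round are persisted on the config server, so the same state can be read back after the fact. A sketch, assuming a shell connected to the same mongos:

    var config = db.getSiblingDB("config");

    // current balancer lock document; state 0 means unlocked, as in the dump above
    printjson(config.locks.findOne({ _id: "balancer" }));

    // metadata events written by the donor and recipient shards during the migration
    config.changelog.find({ what: /^moveChunk/, ns: "test.foo" })
          .sort({ time: -1 }).limit(5)
          .forEach(printjson);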
m30001| Thu Jun 14 01:44:51 [cleanupOldData] (start) waiting to cleanup test.foo from { a: 5.826356493812579 } -> { a: 12.55217658236718 } # cursors:2
m30001| Thu Jun 14 01:44:51 [cleanupOldData] (looping 1) waiting to cleanup test.foo from { a: 5.826356493812579 } -> { a: 12.55217658236718 } # cursors:2
m30001| Thu Jun 14 01:44:51 [cleanupOldData] cursors: 2417047331402008055 7907877773380011536
m30001| Thu Jun 14 01:44:52 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.5, filling with zeroes...
m30001| Thu Jun 14 01:44:53 [conn3] 59500/99471 59%
m30001| Thu Jun 14 01:44:56 [conn3] 69500/99471 69%
m30999| Thu Jun 14 01:44:56 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:44:56 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:44:56 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a580d2fef4d6a507be6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a520d2fef4d6a507be5" } }
m30999| Thu Jun 14 01:44:56 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a580d2fef4d6a507be6
m30999| Thu Jun 14 01:44:56 [Balancer] *** start balancing round
m30001| Thu Jun 14 01:44:56 [cleanupOldData] (looping 201) waiting to cleanup test.foo from { a: 5.826356493812579 } -> { a: 12.55217658236718 } # cursors:2
m30001| Thu Jun 14 01:44:56 [cleanupOldData] cursors: 2417047331402008055 7907877773380011536
m30999| Thu Jun 14 01:44:57 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:44:57 [Balancer] shard0000 maxSize: 0 currSize: 64 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:57 [Balancer] shard0001 maxSize: 0 currSize: 512 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:44:57 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:44:57 [Balancer] shard0000
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, shard: "shard0000" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, shard: "shard0000" }
m30999| Thu Jun 14 01:44:57 [Balancer] shard0001
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 2000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 2000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 2000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 2000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 2000|23, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 2000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 2000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 2000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 2000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 2000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 2000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 2000|39, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 2000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 2000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 2000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 2000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 2000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 2000|63, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 2000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 2000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 2000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 2000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 2000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 2000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 2000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 2000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 2000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 2000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 2000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] ----
m30999| Thu Jun 14 01:44:57 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:44:57 [Balancer] donor : 125 chunks on shard0001
m30999| Thu Jun 14 01:44:57 [Balancer] receiver : 3 chunks on shard0000
m30999| Thu Jun 14 01:44:57 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:44:57 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 4|1||000000000000000000000000 min: { a: 12.55217658236718 } max: { a: 25.60273139230473 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30999| Thu Jun 14 01:44:57 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 1402284, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Thu Jun 14 01:44:57 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 1402284, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: Assertion: 13655:BSONElement: bad type 111
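
The imbalance reported just above (125 chunks on the donor shard0001 versus 3 on the receiver shard0000) is read from the cluster's chunk metadata. A minimal mongo-shell sketch of how that count could be checked against this cluster's mongos, assuming the same namespace and shard names that appear in the log:

    // run from a shell connected to the mongos; chunk metadata lives in the config database
    db.getSiblingDB("config").chunks.find({ ns: "test.foo", shard: "shard0001" }).count()   // donor, ~125 here
    db.getSiblingDB("config").chunks.find({ ns: "test.foo", shard: "shard0000" }).count()   // receiver, ~3 here
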
m30001| Thu Jun 14 01:44:57 [conn2] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:7 W:71 r:770770 w:1312091 reslen:1729 251ms
m30001| Thu Jun 14 01:44:57 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_12.55217658236718", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:44:57 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:44:57 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a5932a28802daeee03a
m30001| Thu Jun 14 01:44:57 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:57-136", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652697095), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:44:57 [conn2] moveChunk request accepted at version 4|1||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:44:57 [conn2] warning: can't move chunk of size (approximately) 1402284 because maximum size allowed to move is 1048576 ns: test.foo { a: 12.55217658236718 } -> { a: 25.60273139230473 }
m30001| Thu Jun 14 01:44:57 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:44:57 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:44:57-137", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652697105), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, step1 of 6: 0, step2 of 6: 2, note: "aborted" } }
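
The donor aborts the migration because the estimated chunk size (1402284 bytes) exceeds the 1048576-byte maxChunkSizeBytes it was given. A hedged sketch of how that range could be measured and, if one wanted, split by hand, using the bounds from the log; dataSize is issued against the donor mongod (localhost:30001 in this run) and sh.splitFind against the mongos:

    // approximate on-disk size of the rejected range, measured on the donor shard
    db.getSiblingDB("test").runCommand({
        dataSize: "test.foo",
        keyPattern: { a: 1 },
        min: { a: 12.55217658236718 },
        max: { a: 25.60273139230473 }
    })
    // one possible follow-up (not what this test does): split the oversized chunk at its median key
    sh.splitFind("test.foo", { a: 12.55217658236718 })
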
m30999| Thu Jun 14 01:44:57 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:44:57 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652667:1804289383', sleeping for 30000ms
m30999| 0x84f514a 0x8126495 0x83f3537 0x811e4ce 0x8121cf1 0x8488fac 0x82c589c 0x8128991 0x82c32b3 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0x9d4542 0x40db6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo11BSONElement4sizeEi+0x20e) [0x811e4ce]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo7BSONObj8toStringERNS_17StringBuilderImplINS_16TrivialAllocatorEEEbbi+0xf1) [0x8121cf1]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo14BalancerPolicy9ChunkInfo8toStringEv+0x7c) [0x8488fac]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo14LazyStringImplINS_14BalancerPolicy9ChunkInfoEE3valEv+0x2c) [0x82c589c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo9LogstreamlsERKNS_10LazyStringE+0x31) [0x8128991]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x853) [0x82c32b3]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c) [0x82c4b6c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0x9d4542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x40db6e]
m30999| Thu Jun 14 01:44:57 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30999| Thu Jun 14 01:44:57 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Thu Jun 14 01:44:57 [Balancer] caught exception while doing balance: BSONElement: bad type 111
m30999| Thu Jun 14 01:44:57 [Balancer] *** End of balancing round
m30000| Thu Jun 14 01:44:57 [conn6] end connection 127.0.0.1:60392 (14 connections now open)
m30001| Thu Jun 14 01:44:59 [conn3] 82800/99471 83%
m30001| Thu Jun 14 01:45:01 [cleanupOldData] (looping 401) waiting to cleanup test.foo from { a: 5.826356493812579 } -> { a: 12.55217658236718 } # cursors:2
m30001| Thu Jun 14 01:45:01 [cleanupOldData] cursors: 2417047331402008055 7907877773380011536
m30001| Thu Jun 14 01:45:04 [conn3] 28300/99471 28%
m30001| Thu Jun 14 01:45:06 [cleanupOldData] (looping 601) waiting to cleanup test.foo from { a: 5.826356493812579 } -> { a: 12.55217658236718 } # cursors:1
m30001| Thu Jun 14 01:45:06 [cleanupOldData] cursors: 2417047331402008055
m30001| Thu Jun 14 01:45:06 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.5, size: 511MB, took 14.206 secs
m30001| Thu Jun 14 01:45:07 [conn3] 73700/99471 74%
m30001| Thu Jun 14 01:45:08 [conn3] CMD: drop test.tmp.mrs.foo_1339652687_0
m30001| Thu Jun 14 01:45:08 [conn3] CMD: drop test.tmp.mr.foo_0
m30001| Thu Jun 14 01:45:08 [conn3] request split points lookup for chunk test.tmp.mrs.foo_1339652687_0 { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:08 [conn3] warning: Finding the split vector for test.tmp.mrs.foo_1339652687_0 over { _id: 1 } keyCount: 483 numSplits: 205 lookedAt: 251 took 155ms
m30001| Thu Jun 14 01:45:08 [conn3] command admin.$cmd command: { splitVector: "test.tmp.mrs.foo_1339652687_0", keyPattern: { _id: 1 }, maxChunkSizeBytes: 1048576 } ntoreturn:1 keyUpdates:0 locks(micros) r:155236 reslen:5478 155ms
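
The 205 split points found above come from the splitVector command logged on this line; the same call can be issued by hand against the shard mongod holding the data (localhost:30001 in this run), mirroring the logged arguments:

    // splitVector runs as an admin command on the shard, not through mongos
    db.adminCommand({
        splitVector: "test.tmp.mrs.foo_1339652687_0",
        keyPattern: { _id: 1 },
        maxChunkSizeBytes: 1048576
    })
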
m30001| Thu Jun 14 01:45:08 [conn3] CMD: drop test.tmp.mr.foo_0
m30001| Thu Jun 14 01:45:08 [conn3] CMD: drop test.tmp.mr.foo_0_inc
m30001| Thu Jun 14 01:45:08 [conn3] command test.$cmd command: { mapreduce: "foo", map: function map2() {
m30001| emit(this._id, {count:1, y:this.y});
m30001| }, reduce: function reduce2(key, values) {
m30001| return values[0];
m30001| }, out: "tmp.mrs.foo_1339652687_0", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 100466 locks(micros) W:2812 r:7859300 w:15282036 reslen:5589 20925ms
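
This is the shard-side first pass (shardedFirstPass: true, results into the temporary tmp.mrs.* collection) of a map-reduce whose output collection is itself sharded. A plausible client-side call that would drive it is sketched below; the map2/reduce2 bodies are copied verbatim from the log, the output name mrShardedOut comes from the lines that follow, and the rest is an assumption about the driving script:

    var map2 = function map2() {
        emit(this._id, { count: 1, y: this.y });
    };
    var reduce2 = function reduce2(key, values) {
        return values[0];
    };
    // with sharded output, mongos runs this first pass on every shard and then a post-process pass
    db.foo.mapReduce(map2, reduce2, { out: { replace: "mrShardedOut", sharded: true } });
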
m30999| Thu Jun 14 01:45:08 [conn] MR with sharded output, NS=test.mrShardedOut
m30999| Thu Jun 14 01:45:08 [conn] enable sharding on: test.mrShardedOut with shard key: { _id: 1 }
m30999| Thu Jun 14 01:45:08 [conn] going to create 206 chunk(s) for: test.mrShardedOut using new epoch 4fd97a640d2fef4d6a507be7
m30000| Thu Jun 14 01:45:08 [conn10] build index test.mrShardedOut { _id: 1 }
m30000| Thu Jun 14 01:45:08 [conn10] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:45:08 [conn10] info: creating collection test.mrShardedOut on add index
m30001| Thu Jun 14 01:45:08 [conn2] build index test.mrShardedOut { _id: 1 }
m30001| Thu Jun 14 01:45:08 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:45:08 [conn2] info: creating collection test.mrShardedOut on add index
m30999| Thu Jun 14 01:45:08 [conn] ChunkManager: time to load chunks for test.mrShardedOut: 4ms sequenceNumber: 135 version: 1|205||4fd97a640d2fef4d6a507be7 based on: (empty)
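
The 206 pre-created chunks and the new epoch are ordinary metadata in the config database, so they can be verified from a shell on any mongos; a small sketch, assuming this cluster:

    db.getSiblingDB("config").collections.findOne({ _id: "test.mrShardedOut" })   // shard key { _id: 1 } and the 4fd97a64... epoch
    db.getSiblingDB("config").chunks.find({ ns: "test.mrShardedOut" }).count()    // 206 here
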
m30001| Thu Jun 14 01:45:08 [cleanupOldData] moveChunk deleted: 623
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion shard0000 localhost:30000 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 1000|205, versionEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.mrShardedOut", need_authoritative: true, errmsg: "first time for collection 'test.mrShardedOut'", ok: 0.0 }
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion shard0000 localhost:30000 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 1000|205, versionEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30000| Thu Jun 14 01:45:08 [conn7] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion shard0001 localhost:30001 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 1000|204, versionEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.mrShardedOut", need_authoritative: true, errmsg: "first time for collection 'test.mrShardedOut'", ok: 0.0 }
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion shard0001 localhost:30001 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 1000|204, versionEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30001| Thu Jun 14 01:45:08 [conn3] no current chunk manager found for this shard, will initialize
m30999| Thu Jun 14 01:45:08 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Thu Jun 14 01:45:08 [conn] created new distributed lock for test.mrShardedOut on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:45:08 [conn] inserting initial doc in config.locks for lock test.mrShardedOut
m30999| Thu Jun 14 01:45:08 [conn] about to acquire distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:conn:512508528",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:45:08 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97a640d2fef4d6a507be8" } }
m30999| { "_id" : "test.mrShardedOut",
m30999| "state" : 0 }
m30999| Thu Jun 14 01:45:08 [conn] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a640d2fef4d6a507be8
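
The 'mr-post-process' lock is a regular distributed-lock document in config.locks: the first JSON blob above is the entry mongos is about to write (state 1), the second is the currently free entry it found (state 0). It can be inspected from a shell connected to this cluster, for example:

    // _id is the namespace being locked; 'why' records the reason shown in the log
    db.getSiblingDB("config").locks.find({ _id: "test.mrShardedOut" }).pretty()
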
m30001| Thu Jun 14 01:45:08 [conn3] CMD: drop test.tmp.mr.foo_1
m30000| Thu Jun 14 01:45:08 [conn7] CMD: drop test.tmp.mr.foo_1
m30000| Thu Jun 14 01:45:08 [conn7] build index test.tmp.mr.foo_1 { _id: 1 }
m30001| Thu Jun 14 01:45:08 [conn3] build index test.tmp.mr.foo_1 { _id: 1 }
m30000| Thu Jun 14 01:45:08 [conn7] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:45:08 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:45:08 [initandlisten] connection accepted from 127.0.0.1:60409 #16 (15 connections now open)
m30001| Thu Jun 14 01:45:08 [conn3] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 2 version: 4|1||4fd97a3b0d2fef4d6a507be2 based on: (empty)
m30000| Thu Jun 14 01:45:08 [conn7] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 2 version: 4|1||4fd97a3b0d2fef4d6a507be2 based on: (empty)
m30001| Thu Jun 14 01:45:08 [conn3] ChunkManager: time to load chunks for test.mrShardedOut: 6ms sequenceNumber: 3 version: 1|205||4fd97a640d2fef4d6a507be7 based on: (empty)
m30000| Thu Jun 14 01:45:08 [initandlisten] connection accepted from 127.0.0.1:60410 #17 (16 connections now open)
m30001| Thu Jun 14 01:45:08 [initandlisten] connection accepted from 127.0.0.1:48986 #8 (8 connections now open)
m30000| Thu Jun 14 01:45:08 [conn7] ChunkManager: time to load chunks for test.mrShardedOut: 5ms sequenceNumber: 3 version: 1|205||4fd97a640d2fef4d6a507be7 based on: (empty)
m30000| Thu Jun 14 01:45:08 [initandlisten] connection accepted from 127.0.0.1:60412 #18 (17 connections now open)
m30001| Thu Jun 14 01:45:08 [initandlisten] connection accepted from 127.0.0.1:48988 #9 (9 connections now open)
m30001| Thu Jun 14 01:45:08 [initandlisten] connection accepted from 127.0.0.1:48989 #10 (10 connections now open)
m30000| Thu Jun 14 01:45:09 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.2, filling with zeroes...
m30000| Thu Jun 14 01:45:12 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.2, size: 64MB, took 3.863 secs
m30000| Thu Jun 14 01:45:13 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.3, filling with zeroes...
m30000| WARNING: mongod wrote null bytes to output
m30000| Thu Jun 14 01:45:15 [conn7] CMD: drop test.mrShardedOut
m30000| Thu Jun 14 01:45:15 [conn7] CMD: drop test.tmp.mr.foo_1
m30000| Thu Jun 14 01:45:15 [conn7] CMD: drop test.tmp.mr.foo_1
m30000| Thu Jun 14 01:45:15 [conn7] CMD: drop test.tmp.mr.foo_1
m30001| Thu Jun 14 01:45:15 [conn3] CMD: drop test.mrShardedOut
m30001| Thu Jun 14 01:45:15 [conn3] CMD: drop test.tmp.mr.foo_1
m30001| Thu Jun 14 01:45:15 [conn3] CMD: drop test.tmp.mr.foo_1
m30001| Thu Jun 14 01:45:15 [conn3] CMD: drop test.tmp.mr.foo_1
m30001| WARNING: mongod wrote null bytes to output
m30999| Thu Jun 14 01:45:15 [conn] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: ObjectId('4fd97a3c05a35677eff228c8') } dataWritten: 655336 splitThreshold: 943718
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3c05a35677eff228c5') }
m30001| Thu Jun 14 01:45:15 [conn3] warning: log line attempted (10k) over max size(10k), printing beginning and end ... Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : MinKey } -->> { : ObjectId('4fd97a3c05a35677eff228c8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|1||000000000000000000000000 min: { _id: ObjectId('4fd97a3c05a35677eff228c8') } max: { _id: ObjectId('4fd97a3c05a35677eff22aac') } dataWritten: 630286 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn7] warning: log line attempted (10k) over max size(10k), printing beginning and end ... Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3c05a35677eff228c8') } -->> { : ObjectId('4fd97a3c05a35677eff22aac') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3c05a35677eff22aab') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd97a3c05a35677eff22aac') } max: { _id: ObjectId('4fd97a3c05a35677eff22c95') } dataWritten: 700712 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3c05a35677eff22c8f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|3||000000000000000000000000 min: { _id: ObjectId('4fd97a3c05a35677eff22c95') } max: { _id: ObjectId('4fd97a3c05a35677eff22e7b') } dataWritten: 706201 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3c05a35677eff22c95') } -->> { : ObjectId('4fd97a3c05a35677eff22e7b') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3c05a35677eff22e78') }
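
Each of these probes compares the amount written into a chunk since mongos last checked (dataWritten) against a splitThreshold derived from the configured maximum chunk size; this test runs with a 1 MB chunk size, which is why the thresholds sit at or just below 1048576 bytes and most probes end in 'chunk not full enough'. The cluster-wide setting lives in config.settings and could be read or changed from a shell roughly as follows (value is in megabytes):

    db.getSiblingDB("config").settings.findOne({ _id: "chunksize" })           // current maximum chunk size, in MB
    db.getSiblingDB("config").settings.save({ _id: "chunksize", value: 1 })    // e.g. keep the 1 MB setting used here
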
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3c05a35677eff22aac') } -->> { : ObjectId('4fd97a3c05a35677eff22c95') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: ObjectId('4fd97a3c05a35677eff22e7b') } max: { _id: ObjectId('4fd97a3c05a35677eff2305f') } dataWritten: 654471 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3c05a35677eff2305e') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3c05a35677eff22e7b') } -->> { : ObjectId('4fd97a3c05a35677eff2305f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|5||000000000000000000000000 min: { _id: ObjectId('4fd97a3c05a35677eff2305f') } max: { _id: ObjectId('4fd97a3d05a35677eff23246') } dataWritten: 581301 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3c05a35677eff2305f') } -->> { : ObjectId('4fd97a3d05a35677eff23246') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff23242') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff23246') } max: { _id: ObjectId('4fd97a3d05a35677eff2342c') } dataWritten: 685463 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff23429') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff23246') } -->> { : ObjectId('4fd97a3d05a35677eff2342c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|7||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff2342c') } max: { _id: ObjectId('4fd97a3d05a35677eff23611') } dataWritten: 605537 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff2342c') } -->> { : ObjectId('4fd97a3d05a35677eff23611') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff2360f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff23611') } max: { _id: ObjectId('4fd97a3d05a35677eff237f5') } dataWritten: 607054 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff237f4') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff23611') } -->> { : ObjectId('4fd97a3d05a35677eff237f5') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|9||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff237f5') } max: { _id: ObjectId('4fd97a3d05a35677eff239dc') } dataWritten: 624821 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff237f5') } -->> { : ObjectId('4fd97a3d05a35677eff239dc') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff239d8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff239dc') } max: { _id: ObjectId('4fd97a3d05a35677eff23bc4') } dataWritten: 545517 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff23bbf') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff239dc') } -->> { : ObjectId('4fd97a3d05a35677eff23bc4') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|11||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff23bc4') } max: { _id: ObjectId('4fd97a3d05a35677eff23da9') } dataWritten: 673937 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff23bc4') } -->> { : ObjectId('4fd97a3d05a35677eff23da9') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff23da7') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff23da9') } max: { _id: ObjectId('4fd97a3d05a35677eff23f8f') } dataWritten: 614539 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff23da9') } -->> { : ObjectId('4fd97a3d05a35677eff23f8f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff23f8c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|13||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff23f8f') } max: { _id: ObjectId('4fd97a3d05a35677eff24176') } dataWritten: 534621 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff23f8f') } -->> { : ObjectId('4fd97a3d05a35677eff24176') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff24172') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff24176') } max: { _id: ObjectId('4fd97a3d05a35677eff2435d') } dataWritten: 627321 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff24176') } -->> { : ObjectId('4fd97a3d05a35677eff2435d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff24359') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|15||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff2435d') } max: { _id: ObjectId('4fd97a3d05a35677eff24541') } dataWritten: 731319 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff2435d') } -->> { : ObjectId('4fd97a3d05a35677eff24541') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff24540') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff24541') } max: { _id: ObjectId('4fd97a3d05a35677eff24727') } dataWritten: 587586 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff24541') } -->> { : ObjectId('4fd97a3d05a35677eff24727') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff24724') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|17||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff24727') } max: { _id: ObjectId('4fd97a3d05a35677eff2490f') } dataWritten: 623495 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff24727') } -->> { : ObjectId('4fd97a3d05a35677eff2490f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff2490a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff2490f') } max: { _id: ObjectId('4fd97a3d05a35677eff24af4') } dataWritten: 544127 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff2490f') } -->> { : ObjectId('4fd97a3d05a35677eff24af4') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff24af2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|19||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff24af4') } max: { _id: ObjectId('4fd97a3d05a35677eff24cde') } dataWritten: 685769 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff24af4') } -->> { : ObjectId('4fd97a3d05a35677eff24cde') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff24cd7') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff24cde') } max: { _id: ObjectId('4fd97a3d05a35677eff24ec4') } dataWritten: 535923 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff24cde') } -->> { : ObjectId('4fd97a3d05a35677eff24ec4') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3d05a35677eff24ec1') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|21||000000000000000000000000 min: { _id: ObjectId('4fd97a3d05a35677eff24ec4') } max: { _id: ObjectId('4fd97a3e05a35677eff250ad') } dataWritten: 528926 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3d05a35677eff24ec4') } -->> { : ObjectId('4fd97a3e05a35677eff250ad') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff250a7') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff250ad') } max: { _id: ObjectId('4fd97a3e05a35677eff25295') } dataWritten: 707233 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff250ad') } -->> { : ObjectId('4fd97a3e05a35677eff25295') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25290') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|23||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff25295') } max: { _id: ObjectId('4fd97a3e05a35677eff2547d') } dataWritten: 536372 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff25295') } -->> { : ObjectId('4fd97a3e05a35677eff2547d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25478') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff2547d') } max: { _id: ObjectId('4fd97a3e05a35677eff25663') } dataWritten: 637632 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff2547d') } -->> { : ObjectId('4fd97a3e05a35677eff25663') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25660') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|25||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff25663') } max: { _id: ObjectId('4fd97a3e05a35677eff2584a') } dataWritten: 719675 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff25663') } -->> { : ObjectId('4fd97a3e05a35677eff2584a') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25846') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff2584a') } max: { _id: ObjectId('4fd97a3e05a35677eff25a31') } dataWritten: 583704 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff2584a') } -->> { : ObjectId('4fd97a3e05a35677eff25a31') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25a2d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|27||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff25a31') } max: { _id: ObjectId('4fd97a3e05a35677eff25c16') } dataWritten: 637017 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff25a31') } -->> { : ObjectId('4fd97a3e05a35677eff25c16') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25c14') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|28||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff25c16') } max: { _id: ObjectId('4fd97a3e05a35677eff25e01') } dataWritten: 695291 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff25c16') } -->> { : ObjectId('4fd97a3e05a35677eff25e01') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25df9') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|29||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff25e01') } max: { _id: ObjectId('4fd97a3e05a35677eff25fe8') } dataWritten: 575408 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff25e01') } -->> { : ObjectId('4fd97a3e05a35677eff25fe8') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff25fe4') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff25fe8') } max: { _id: ObjectId('4fd97a3e05a35677eff261d0') } dataWritten: 687034 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff25fe8') } -->> { : ObjectId('4fd97a3e05a35677eff261d0') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff261cb') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|31||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff261d0') } max: { _id: ObjectId('4fd97a3e05a35677eff263b4') } dataWritten: 607980 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff261d0') } -->> { : ObjectId('4fd97a3e05a35677eff263b4') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff263b3') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff263b4') } max: { _id: ObjectId('4fd97a3e05a35677eff26598') } dataWritten: 677199 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff263b4') } -->> { : ObjectId('4fd97a3e05a35677eff26598') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff26597') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|33||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff26598') } max: { _id: ObjectId('4fd97a3e05a35677eff2677e') } dataWritten: 645213 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff26598') } -->> { : ObjectId('4fd97a3e05a35677eff2677e') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3e05a35677eff2677b') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: ObjectId('4fd97a3e05a35677eff2677e') } max: { _id: ObjectId('4fd97a3f05a35677eff26964') } dataWritten: 581262 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3e05a35677eff2677e') } -->> { : ObjectId('4fd97a3f05a35677eff26964') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff26961') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|35||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff26964') } max: { _id: ObjectId('4fd97a3f05a35677eff26b4c') } dataWritten: 603076 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff26964') } -->> { : ObjectId('4fd97a3f05a35677eff26b4c') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff26b47') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff26b4c') } max: { _id: ObjectId('4fd97a3f05a35677eff26d35') } dataWritten: 701262 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff26b4c') } -->> { : ObjectId('4fd97a3f05a35677eff26d35') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff26d2f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|37||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff26d35') } max: { _id: ObjectId('4fd97a3f05a35677eff26f1f') } dataWritten: 533920 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff26d35') } -->> { : ObjectId('4fd97a3f05a35677eff26f1f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff26f18') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff26f1f') } max: { _id: ObjectId('4fd97a3f05a35677eff27105') } dataWritten: 682166 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff26f1f') } -->> { : ObjectId('4fd97a3f05a35677eff27105') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff27102') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|39||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff27105') } max: { _id: ObjectId('4fd97a3f05a35677eff272ec') } dataWritten: 573235 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff27105') } -->> { : ObjectId('4fd97a3f05a35677eff272ec') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff272e8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff272ec') } max: { _id: ObjectId('4fd97a3f05a35677eff274d5') } dataWritten: 629165 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff272ec') } -->> { : ObjectId('4fd97a3f05a35677eff274d5') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff274cf') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|41||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff274d5') } max: { _id: ObjectId('4fd97a3f05a35677eff276ba') } dataWritten: 697027 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff274d5') } -->> { : ObjectId('4fd97a3f05a35677eff276ba') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff276b8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff276ba') } max: { _id: ObjectId('4fd97a3f05a35677eff278a1') } dataWritten: 720839 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff276ba') } -->> { : ObjectId('4fd97a3f05a35677eff278a1') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff2789d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|43||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff278a1') } max: { _id: ObjectId('4fd97a3f05a35677eff27a87') } dataWritten: 715096 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff278a1') } -->> { : ObjectId('4fd97a3f05a35677eff27a87') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff27a84') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff27a87') } max: { _id: ObjectId('4fd97a3f05a35677eff27c6f') } dataWritten: 708444 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff27a87') } -->> { : ObjectId('4fd97a3f05a35677eff27c6f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff27c6a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|45||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff27c6f') } max: { _id: ObjectId('4fd97a3f05a35677eff27e57') } dataWritten: 613079 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff27c6f') } -->> { : ObjectId('4fd97a3f05a35677eff27e57') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff27e52') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff27e57') } max: { _id: ObjectId('4fd97a3f05a35677eff2803f') } dataWritten: 715658 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff27e57') } -->> { : ObjectId('4fd97a3f05a35677eff2803f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff2803a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|47||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff2803f') } max: { _id: ObjectId('4fd97a3f05a35677eff28226') } dataWritten: 557821 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff2803f') } -->> { : ObjectId('4fd97a3f05a35677eff28226') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff28222') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff28226') } max: { _id: ObjectId('4fd97a3f05a35677eff2840d') } dataWritten: 705918 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff28226') } -->> { : ObjectId('4fd97a3f05a35677eff2840d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff28409') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|49||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff2840d') } max: { _id: ObjectId('4fd97a3f05a35677eff285f3') } dataWritten: 733338 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff2840d') } -->> { : ObjectId('4fd97a3f05a35677eff285f3') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a3f05a35677eff285f0') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: ObjectId('4fd97a3f05a35677eff285f3') } max: { _id: ObjectId('4fd97a4005a35677eff287d7') } dataWritten: 710657 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a3f05a35677eff285f3') } -->> { : ObjectId('4fd97a4005a35677eff287d7') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff287d6') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|51||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff287d7') } max: { _id: ObjectId('4fd97a4005a35677eff289bf') } dataWritten: 717556 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff287d7') } -->> { : ObjectId('4fd97a4005a35677eff289bf') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff289ba') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff289bf') } max: { _id: ObjectId('4fd97a4005a35677eff28ba4') } dataWritten: 730526 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff289bf') } -->> { : ObjectId('4fd97a4005a35677eff28ba4') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff28ba2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|53||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff28ba4') } max: { _id: ObjectId('4fd97a4005a35677eff28d8b') } dataWritten: 683890 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff28ba4') } -->> { : ObjectId('4fd97a4005a35677eff28d8b') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff28d87') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|54||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff28d8b') } max: { _id: ObjectId('4fd97a4005a35677eff28f71') } dataWritten: 722191 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff28d8b') } -->> { : ObjectId('4fd97a4005a35677eff28f71') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff28f6e') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|55||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff28f71') } max: { _id: ObjectId('4fd97a4005a35677eff29159') } dataWritten: 636321 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff28f71') } -->> { : ObjectId('4fd97a4005a35677eff29159') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff29154') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|56||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff29159') } max: { _id: ObjectId('4fd97a4005a35677eff2933f') } dataWritten: 666323 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff29159') } -->> { : ObjectId('4fd97a4005a35677eff2933f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff2933c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|57||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff2933f') } max: { _id: ObjectId('4fd97a4005a35677eff29523') } dataWritten: 567571 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff2933f') } -->> { : ObjectId('4fd97a4005a35677eff29523') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff29522') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff29523') } max: { _id: ObjectId('4fd97a4005a35677eff29708') } dataWritten: 534047 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff29523') } -->> { : ObjectId('4fd97a4005a35677eff29708') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff29706') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|59||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff29708') } max: { _id: ObjectId('4fd97a4005a35677eff298ed') } dataWritten: 620047 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff29708') } -->> { : ObjectId('4fd97a4005a35677eff298ed') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff298eb') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|60||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff298ed') } max: { _id: ObjectId('4fd97a4005a35677eff29ad4') } dataWritten: 619775 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff298ed') } -->> { : ObjectId('4fd97a4005a35677eff29ad4') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff29ad0') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|61||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff29ad4') } max: { _id: ObjectId('4fd97a4005a35677eff29cba') } dataWritten: 692586 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff29ad4') } -->> { : ObjectId('4fd97a4005a35677eff29cba') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff29cb7') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|62||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff29cba') } max: { _id: ObjectId('4fd97a4005a35677eff29e9f') } dataWritten: 704823 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff29cba') } -->> { : ObjectId('4fd97a4005a35677eff29e9f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff29e9d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|63||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff29e9f') } max: { _id: ObjectId('4fd97a4005a35677eff2a086') } dataWritten: 564055 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff29e9f') } -->> { : ObjectId('4fd97a4005a35677eff2a086') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff2a082') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|64||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff2a086') } max: { _id: ObjectId('4fd97a4005a35677eff2a26b') } dataWritten: 601637 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff2a086') } -->> { : ObjectId('4fd97a4005a35677eff2a26b') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff2a269') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|65||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff2a26b') } max: { _id: ObjectId('4fd97a4005a35677eff2a450') } dataWritten: 548957 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff2a26b') } -->> { : ObjectId('4fd97a4005a35677eff2a450') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4005a35677eff2a44e') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|66||000000000000000000000000 min: { _id: ObjectId('4fd97a4005a35677eff2a450') } max: { _id: ObjectId('4fd97a4105a35677eff2a636') } dataWritten: 636474 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4005a35677eff2a450') } -->> { : ObjectId('4fd97a4105a35677eff2a636') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2a633') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|67||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2a636') } max: { _id: ObjectId('4fd97a4105a35677eff2a81d') } dataWritten: 566738 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2a636') } -->> { : ObjectId('4fd97a4105a35677eff2a81d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2a819') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|68||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2a81d') } max: { _id: ObjectId('4fd97a4105a35677eff2aa03') } dataWritten: 554268 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2a81d') } -->> { : ObjectId('4fd97a4105a35677eff2aa03') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2aa00') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|69||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2aa03') } max: { _id: ObjectId('4fd97a4105a35677eff2abea') } dataWritten: 584641 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2aa03') } -->> { : ObjectId('4fd97a4105a35677eff2abea') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2abe6') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|70||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2abea') } max: { _id: ObjectId('4fd97a4105a35677eff2add0') } dataWritten: 610397 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2abea') } -->> { : ObjectId('4fd97a4105a35677eff2add0') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2adcd') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|71||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2add0') } max: { _id: ObjectId('4fd97a4105a35677eff2afb8') } dataWritten: 656987 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2add0') } -->> { : ObjectId('4fd97a4105a35677eff2afb8') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2afb3') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|72||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2afb8') } max: { _id: ObjectId('4fd97a4105a35677eff2b1a0') } dataWritten: 546701 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2afb8') } -->> { : ObjectId('4fd97a4105a35677eff2b1a0') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2b19b') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|73||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2b1a0') } max: { _id: ObjectId('4fd97a4105a35677eff2b387') } dataWritten: 596156 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2b1a0') } -->> { : ObjectId('4fd97a4105a35677eff2b387') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2b383') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|74||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2b387') } max: { _id: ObjectId('4fd97a4105a35677eff2b56f') } dataWritten: 637002 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2b387') } -->> { : ObjectId('4fd97a4105a35677eff2b56f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2b56a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|75||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2b56f') } max: { _id: ObjectId('4fd97a4105a35677eff2b757') } dataWritten: 727617 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2b56f') } -->> { : ObjectId('4fd97a4105a35677eff2b757') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2b752') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|76||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2b757') } max: { _id: ObjectId('4fd97a4105a35677eff2b93b') } dataWritten: 676416 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2b757') } -->> { : ObjectId('4fd97a4105a35677eff2b93b') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2b93a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|77||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2b93b') } max: { _id: ObjectId('4fd97a4105a35677eff2bb23') } dataWritten: 613369 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2b93b') } -->> { : ObjectId('4fd97a4105a35677eff2bb23') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2bb1e') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|78||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2bb23') } max: { _id: ObjectId('4fd97a4105a35677eff2bd07') } dataWritten: 544952 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2bb23') } -->> { : ObjectId('4fd97a4105a35677eff2bd07') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4105a35677eff2bd06') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|79||000000000000000000000000 min: { _id: ObjectId('4fd97a4105a35677eff2bd07') } max: { _id: ObjectId('4fd97a4205a35677eff2beee') } dataWritten: 649415 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4105a35677eff2bd07') } -->> { : ObjectId('4fd97a4205a35677eff2beee') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2beea') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|80||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2beee') } max: { _id: ObjectId('4fd97a4205a35677eff2c0d4') } dataWritten: 607416 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2beee') } -->> { : ObjectId('4fd97a4205a35677eff2c0d4') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2c0d1') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|81||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2c0d4') } max: { _id: ObjectId('4fd97a4205a35677eff2c2bb') } dataWritten: 735648 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2c0d4') } -->> { : ObjectId('4fd97a4205a35677eff2c2bb') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2c2b7') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|82||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2c2bb') } max: { _id: ObjectId('4fd97a4205a35677eff2c4a2') } dataWritten: 627681 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2c2bb') } -->> { : ObjectId('4fd97a4205a35677eff2c4a2') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2c49e') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|83||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2c4a2') } max: { _id: ObjectId('4fd97a4205a35677eff2c687') } dataWritten: 602862 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2c4a2') } -->> { : ObjectId('4fd97a4205a35677eff2c687') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2c685') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|84||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2c687') } max: { _id: ObjectId('4fd97a4205a35677eff2c86f') } dataWritten: 682410 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2c687') } -->> { : ObjectId('4fd97a4205a35677eff2c86f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2c86a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|85||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2c86f') } max: { _id: ObjectId('4fd97a4205a35677eff2ca54') } dataWritten: 612629 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2c86f') } -->> { : ObjectId('4fd97a4205a35677eff2ca54') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2ca52') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|86||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2ca54') } max: { _id: ObjectId('4fd97a4205a35677eff2cc39') } dataWritten: 711655 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2ca54') } -->> { : ObjectId('4fd97a4205a35677eff2cc39') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2cc37') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|87||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2cc39') } max: { _id: ObjectId('4fd97a4205a35677eff2ce20') } dataWritten: 612571 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2cc39') } -->> { : ObjectId('4fd97a4205a35677eff2ce20') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2ce1c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|88||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2ce20') } max: { _id: ObjectId('4fd97a4205a35677eff2d008') } dataWritten: 660239 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2ce20') } -->> { : ObjectId('4fd97a4205a35677eff2d008') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2d003') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|89||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2d008') } max: { _id: ObjectId('4fd97a4205a35677eff2d1ef') } dataWritten: 723579 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2d008') } -->> { : ObjectId('4fd97a4205a35677eff2d1ef') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2d1eb') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2d1ef') } max: { _id: ObjectId('4fd97a4205a35677eff2d3d5') } dataWritten: 705204 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2d1ef') } -->> { : ObjectId('4fd97a4205a35677eff2d3d5') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2d3d2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|91||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2d3d5') } max: { _id: ObjectId('4fd97a4205a35677eff2d5bc') } dataWritten: 540723 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2d3d5') } -->> { : ObjectId('4fd97a4205a35677eff2d5bc') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2d5b8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|92||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2d5bc') } max: { _id: ObjectId('4fd97a4205a35677eff2d7a1') } dataWritten: 678922 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2d5bc') } -->> { : ObjectId('4fd97a4205a35677eff2d7a1') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4205a35677eff2d79f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|93||000000000000000000000000 min: { _id: ObjectId('4fd97a4205a35677eff2d7a1') } max: { _id: ObjectId('4fd97a4305a35677eff2d986') } dataWritten: 672899 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4205a35677eff2d7a1') } -->> { : ObjectId('4fd97a4305a35677eff2d986') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2d984') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|94||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2d986') } max: { _id: ObjectId('4fd97a4305a35677eff2db6f') } dataWritten: 578445 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2d986') } -->> { : ObjectId('4fd97a4305a35677eff2db6f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2db69') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|95||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2db6f') } max: { _id: ObjectId('4fd97a4305a35677eff2dd54') } dataWritten: 544512 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2db6f') } -->> { : ObjectId('4fd97a4305a35677eff2dd54') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2dd52') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|96||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2dd54') } max: { _id: ObjectId('4fd97a4305a35677eff2df3e') } dataWritten: 702976 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2dd54') } -->> { : ObjectId('4fd97a4305a35677eff2df3e') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2df37') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|97||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2df3e') } max: { _id: ObjectId('4fd97a4305a35677eff2e127') } dataWritten: 689554 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2df3e') } -->> { : ObjectId('4fd97a4305a35677eff2e127') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2e121') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|98||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2e127') } max: { _id: ObjectId('4fd97a4305a35677eff2e30d') } dataWritten: 583836 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2e127') } -->> { : ObjectId('4fd97a4305a35677eff2e30d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2e30a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|99||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2e30d') } max: { _id: ObjectId('4fd97a4305a35677eff2e4f2') } dataWritten: 724425 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2e30d') } -->> { : ObjectId('4fd97a4305a35677eff2e4f2') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2e4f0') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|100||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2e4f2') } max: { _id: ObjectId('4fd97a4305a35677eff2e6d8') } dataWritten: 532742 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2e4f2') } -->> { : ObjectId('4fd97a4305a35677eff2e6d8') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2e6d5') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|101||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2e6d8') } max: { _id: ObjectId('4fd97a4305a35677eff2e8bf') } dataWritten: 669948 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2e6d8') } -->> { : ObjectId('4fd97a4305a35677eff2e8bf') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2e8bb') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|102||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2e8bf') } max: { _id: ObjectId('4fd97a4305a35677eff2eaa5') } dataWritten: 645250 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2e8bf') } -->> { : ObjectId('4fd97a4305a35677eff2eaa5') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2eaa2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|103||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2eaa5') } max: { _id: ObjectId('4fd97a4305a35677eff2ec89') } dataWritten: 549753 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2eaa5') } -->> { : ObjectId('4fd97a4305a35677eff2ec89') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2ec88') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|104||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2ec89') } max: { _id: ObjectId('4fd97a4305a35677eff2ee6d') } dataWritten: 526699 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2ec89') } -->> { : ObjectId('4fd97a4305a35677eff2ee6d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2ee6c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|105||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2ee6d') } max: { _id: ObjectId('4fd97a4305a35677eff2f052') } dataWritten: 541880 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2ee6d') } -->> { : ObjectId('4fd97a4305a35677eff2f052') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2f050') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|106||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2f052') } max: { _id: ObjectId('4fd97a4305a35677eff2f239') } dataWritten: 543370 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2f235') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2f052') } -->> { : ObjectId('4fd97a4305a35677eff2f239') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|107||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2f239') } max: { _id: ObjectId('4fd97a4305a35677eff2f41f') } dataWritten: 680026 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2f239') } -->> { : ObjectId('4fd97a4305a35677eff2f41f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2f41c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|108||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2f41f') } max: { _id: ObjectId('4fd97a4305a35677eff2f603') } dataWritten: 626640 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2f41f') } -->> { : ObjectId('4fd97a4305a35677eff2f603') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2f602') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|109||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2f603') } max: { _id: ObjectId('4fd97a4305a35677eff2f7e7') } dataWritten: 559828 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2f603') } -->> { : ObjectId('4fd97a4305a35677eff2f7e7') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2f7e6') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|110||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2f7e7') } max: { _id: ObjectId('4fd97a4305a35677eff2f9cd') } dataWritten: 593279 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2f7e7') } -->> { : ObjectId('4fd97a4305a35677eff2f9cd') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2f9ca') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|111||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2f9cd') } max: { _id: ObjectId('4fd97a4305a35677eff2fbb4') } dataWritten: 711934 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2f9cd') } -->> { : ObjectId('4fd97a4305a35677eff2fbb4') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2fbb0') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|112||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2fbb4') } max: { _id: ObjectId('4fd97a4305a35677eff2fd9a') } dataWritten: 559428 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2fbb4') } -->> { : ObjectId('4fd97a4305a35677eff2fd9a') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2fd97') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|113||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2fd9a') } max: { _id: ObjectId('4fd97a4305a35677eff2ff82') } dataWritten: 696675 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2fd9a') } -->> { : ObjectId('4fd97a4305a35677eff2ff82') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4305a35677eff2ff7d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|114||000000000000000000000000 min: { _id: ObjectId('4fd97a4305a35677eff2ff82') } max: { _id: ObjectId('4fd97a4405a35677eff3016a') } dataWritten: 579829 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4305a35677eff2ff82') } -->> { : ObjectId('4fd97a4405a35677eff3016a') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff30165') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|115||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff3016a') } max: { _id: ObjectId('4fd97a4405a35677eff30351') } dataWritten: 715391 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff3016a') } -->> { : ObjectId('4fd97a4405a35677eff30351') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff3034d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|116||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff30351') } max: { _id: ObjectId('4fd97a4405a35677eff30537') } dataWritten: 571094 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff30351') } -->> { : ObjectId('4fd97a4405a35677eff30537') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff30534') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|117||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff30537') } max: { _id: ObjectId('4fd97a4405a35677eff30721') } dataWritten: 559646 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff30537') } -->> { : ObjectId('4fd97a4405a35677eff30721') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff3071a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|118||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff30721') } max: { _id: ObjectId('4fd97a4405a35677eff30907') } dataWritten: 588671 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff30721') } -->> { : ObjectId('4fd97a4405a35677eff30907') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff30904') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|119||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff30907') } max: { _id: ObjectId('4fd97a4405a35677eff30aef') } dataWritten: 705967 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff30907') } -->> { : ObjectId('4fd97a4405a35677eff30aef') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff30aea') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|120||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff30aef') } max: { _id: ObjectId('4fd97a4405a35677eff30cd5') } dataWritten: 540691 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff30aef') } -->> { : ObjectId('4fd97a4405a35677eff30cd5') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff30cd2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|121||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff30cd5') } max: { _id: ObjectId('4fd97a4405a35677eff30ebc') } dataWritten: 557828 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff30cd5') } -->> { : ObjectId('4fd97a4405a35677eff30ebc') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff30eb8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|122||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff30ebc') } max: { _id: ObjectId('4fd97a4405a35677eff310a7') } dataWritten: 721438 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff30ebc') } -->> { : ObjectId('4fd97a4405a35677eff310a7') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff3109f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|123||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff310a7') } max: { _id: ObjectId('4fd97a4405a35677eff3128e') } dataWritten: 694361 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff310a7') } -->> { : ObjectId('4fd97a4405a35677eff3128e') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff3128a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|124||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff3128e') } max: { _id: ObjectId('4fd97a4405a35677eff31473') } dataWritten: 704280 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff3128e') } -->> { : ObjectId('4fd97a4405a35677eff31473') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff31471') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|125||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff31473') } max: { _id: ObjectId('4fd97a4405a35677eff3165b') } dataWritten: 558317 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff31473') } -->> { : ObjectId('4fd97a4405a35677eff3165b') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff31656') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|126||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff3165b') } max: { _id: ObjectId('4fd97a4405a35677eff31841') } dataWritten: 711459 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff3165b') } -->> { : ObjectId('4fd97a4405a35677eff31841') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff3183e') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|127||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff31841') } max: { _id: ObjectId('4fd97a4405a35677eff31a28') } dataWritten: 667965 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff31841') } -->> { : ObjectId('4fd97a4405a35677eff31a28') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff31a24') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|128||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff31a28') } max: { _id: ObjectId('4fd97a4405a35677eff31c0d') } dataWritten: 716019 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff31a28') } -->> { : ObjectId('4fd97a4405a35677eff31c0d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff31c0b') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|129||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff31c0d') } max: { _id: ObjectId('4fd97a4405a35677eff31df3') } dataWritten: 560214 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff31c0d') } -->> { : ObjectId('4fd97a4405a35677eff31df3') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff31df0') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|130||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff31df3') } max: { _id: ObjectId('4fd97a4405a35677eff31fda') } dataWritten: 658390 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff31fd6') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|131||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff31fda') } max: { _id: ObjectId('4fd97a4405a35677eff321bf') } dataWritten: 721347 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff321bd') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|132||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff321bf') } max: { _id: ObjectId('4fd97a4405a35677eff323a4') } dataWritten: 702635 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff323a2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|133||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff323a4') } max: { _id: ObjectId('4fd97a4405a35677eff3258c') } dataWritten: 567593 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4405a35677eff32587') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|134||000000000000000000000000 min: { _id: ObjectId('4fd97a4405a35677eff3258c') } max: { _id: ObjectId('4fd97a4505a35677eff32774') } dataWritten: 541424 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff3276f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|135||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff32774') } max: { _id: ObjectId('4fd97a4505a35677eff32958') } dataWritten: 703001 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff32957') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|136||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff32958') } max: { _id: ObjectId('4fd97a4505a35677eff32b3d') } dataWritten: 579897 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff32b3b') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|137||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff32b3d') } max: { _id: ObjectId('4fd97a4505a35677eff32d23') } dataWritten: 554137 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff32d20') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|138||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff32d23') } max: { _id: ObjectId('4fd97a4505a35677eff32f0c') } dataWritten: 653351 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff32f06') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|139||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff32f0c') } max: { _id: ObjectId('4fd97a4505a35677eff330f5') } dataWritten: 685610 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff330ef') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|140||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff330f5') } max: { _id: ObjectId('4fd97a4505a35677eff332d9') } dataWritten: 586551 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff332d8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|141||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff332d9') } max: { _id: ObjectId('4fd97a4505a35677eff334c2') } dataWritten: 721265 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff334bc') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|142||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff334c2') } max: { _id: ObjectId('4fd97a4505a35677eff336ab') } dataWritten: 659334 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff336a5') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff31df3') } -->> { : ObjectId('4fd97a4405a35677eff31fda') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff321bf') } -->> { : ObjectId('4fd97a4405a35677eff323a4') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff3258c') } -->> { : ObjectId('4fd97a4505a35677eff32774') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff32958') } -->> { : ObjectId('4fd97a4505a35677eff32b3d') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff32d23') } -->> { : ObjectId('4fd97a4505a35677eff32f0c') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff330f5') } -->> { : ObjectId('4fd97a4505a35677eff332d9') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff334c2') } -->> { : ObjectId('4fd97a4505a35677eff336ab') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|143||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff336ab') } max: { _id: ObjectId('4fd97a4505a35677eff33891') } dataWritten: 622776 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4505a35677eff3388e') }
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff31fda') } -->> { : ObjectId('4fd97a4405a35677eff321bf') }
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4405a35677eff323a4') } -->> { : ObjectId('4fd97a4405a35677eff3258c') }
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff32774') } -->> { : ObjectId('4fd97a4505a35677eff32958') }
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff32b3d') } -->> { : ObjectId('4fd97a4505a35677eff32d23') }
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff32f0c') } -->> { : ObjectId('4fd97a4505a35677eff330f5') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|144||000000000000000000000000 min: { _id: ObjectId('4fd97a4505a35677eff33891') } max: { _id: ObjectId('4fd97a4605a35677eff33a77') } dataWritten: 675406 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff332d9') } -->> { : ObjectId('4fd97a4505a35677eff334c2') }
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff336ab') } -->> { : ObjectId('4fd97a4505a35677eff33891') }
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4505a35677eff33891') } -->> { : ObjectId('4fd97a4605a35677eff33a77') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff33a74') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|145||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff33a77') } max: { _id: ObjectId('4fd97a4605a35677eff33c5c') } dataWritten: 707311 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff33a77') } -->> { : ObjectId('4fd97a4605a35677eff33c5c') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff33c5a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|146||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff33c5c') } max: { _id: ObjectId('4fd97a4605a35677eff33e41') } dataWritten: 600924 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff33c5c') } -->> { : ObjectId('4fd97a4605a35677eff33e41') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff33e3f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|147||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff33e41') } max: { _id: ObjectId('4fd97a4605a35677eff34026') } dataWritten: 720054 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff33e41') } -->> { : ObjectId('4fd97a4605a35677eff34026') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff34024') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|148||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff34026') } max: { _id: ObjectId('4fd97a4605a35677eff3420d') } dataWritten: 529714 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff34026') } -->> { : ObjectId('4fd97a4605a35677eff3420d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff34209') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|149||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff3420d') } max: { _id: ObjectId('4fd97a4605a35677eff343f3') } dataWritten: 663263 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff3420d') } -->> { : ObjectId('4fd97a4605a35677eff343f3') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff343f0') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|150||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff343f3') } max: { _id: ObjectId('4fd97a4605a35677eff345d9') } dataWritten: 687811 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff343f3') } -->> { : ObjectId('4fd97a4605a35677eff345d9') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff345d6') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|151||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff345d9') } max: { _id: ObjectId('4fd97a4605a35677eff347c1') } dataWritten: 544072 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff345d9') } -->> { : ObjectId('4fd97a4605a35677eff347c1') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff347bc') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|152||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff347c1') } max: { _id: ObjectId('4fd97a4605a35677eff349a9') } dataWritten: 696806 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff347c1') } -->> { : ObjectId('4fd97a4605a35677eff349a9') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4605a35677eff349a4') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|153||000000000000000000000000 min: { _id: ObjectId('4fd97a4605a35677eff349a9') } max: { _id: ObjectId('4fd97a4705a35677eff34b90') } dataWritten: 669845 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4605a35677eff349a9') } -->> { : ObjectId('4fd97a4705a35677eff34b90') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff34b8c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|154||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff34b90') } max: { _id: ObjectId('4fd97a4705a35677eff34d79') } dataWritten: 711019 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff34b90') } -->> { : ObjectId('4fd97a4705a35677eff34d79') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff34d73') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|155||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff34d79') } max: { _id: ObjectId('4fd97a4705a35677eff34f5f') } dataWritten: 664924 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff34d79') } -->> { : ObjectId('4fd97a4705a35677eff34f5f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff34f5c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|156||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff34f5f') } max: { _id: ObjectId('4fd97a4705a35677eff35147') } dataWritten: 701715 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff34f5f') } -->> { : ObjectId('4fd97a4705a35677eff35147') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff35142') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|157||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff35147') } max: { _id: ObjectId('4fd97a4705a35677eff3532c') } dataWritten: 683074 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff35147') } -->> { : ObjectId('4fd97a4705a35677eff3532c') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff3532a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|158||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff3532c') } max: { _id: ObjectId('4fd97a4705a35677eff35511') } dataWritten: 593598 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff3532c') } -->> { : ObjectId('4fd97a4705a35677eff35511') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff3550f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|159||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff35511') } max: { _id: ObjectId('4fd97a4705a35677eff356fa') } dataWritten: 682767 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff35511') } -->> { : ObjectId('4fd97a4705a35677eff356fa') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff356f4') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|160||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff356fa') } max: { _id: ObjectId('4fd97a4705a35677eff358e1') } dataWritten: 720084 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff356fa') } -->> { : ObjectId('4fd97a4705a35677eff358e1') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff358dd') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|161||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff358e1') } max: { _id: ObjectId('4fd97a4705a35677eff35ac6') } dataWritten: 723493 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff358e1') } -->> { : ObjectId('4fd97a4705a35677eff35ac6') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff35ac4') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|162||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff35ac6') } max: { _id: ObjectId('4fd97a4705a35677eff35cab') } dataWritten: 665790 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff35ac6') } -->> { : ObjectId('4fd97a4705a35677eff35cab') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff35ca9') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|163||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff35cab') } max: { _id: ObjectId('4fd97a4705a35677eff35e91') } dataWritten: 687638 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff35cab') } -->> { : ObjectId('4fd97a4705a35677eff35e91') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4705a35677eff35e8e') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|164||000000000000000000000000 min: { _id: ObjectId('4fd97a4705a35677eff35e91') } max: { _id: ObjectId('4fd97a4805a35677eff3607a') } dataWritten: 556119 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4705a35677eff35e91') } -->> { : ObjectId('4fd97a4805a35677eff3607a') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff36074') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|165||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff3607a') } max: { _id: ObjectId('4fd97a4805a35677eff3625f') } dataWritten: 677638 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff3607a') } -->> { : ObjectId('4fd97a4805a35677eff3625f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff3625d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|166||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff3625f') } max: { _id: ObjectId('4fd97a4805a35677eff36447') } dataWritten: 659883 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff3625f') } -->> { : ObjectId('4fd97a4805a35677eff36447') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff36442') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|167||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff36447') } max: { _id: ObjectId('4fd97a4805a35677eff3662c') } dataWritten: 607408 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff36447') } -->> { : ObjectId('4fd97a4805a35677eff3662c') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff3662a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|168||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff3662c') } max: { _id: ObjectId('4fd97a4805a35677eff36814') } dataWritten: 707604 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff3662c') } -->> { : ObjectId('4fd97a4805a35677eff36814') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff3680f') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|169||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff36814') } max: { _id: ObjectId('4fd97a4805a35677eff369f9') } dataWritten: 571667 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff36814') } -->> { : ObjectId('4fd97a4805a35677eff369f9') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff369f7') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|170||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff369f9') } max: { _id: ObjectId('4fd97a4805a35677eff36be0') } dataWritten: 556856 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff369f9') } -->> { : ObjectId('4fd97a4805a35677eff36be0') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff36bdc') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|171||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff36be0') } max: { _id: ObjectId('4fd97a4805a35677eff36dca') } dataWritten: 563399 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff36be0') } -->> { : ObjectId('4fd97a4805a35677eff36dca') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff36dc3') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|172||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff36dca') } max: { _id: ObjectId('4fd97a4805a35677eff36faf') } dataWritten: 552560 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff36dca') } -->> { : ObjectId('4fd97a4805a35677eff36faf') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff36fad') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|173||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff36faf') } max: { _id: ObjectId('4fd97a4805a35677eff37195') } dataWritten: 684452 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff36faf') } -->> { : ObjectId('4fd97a4805a35677eff37195') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff37192') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|174||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff37195') } max: { _id: ObjectId('4fd97a4805a35677eff3737a') } dataWritten: 655404 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff37195') } -->> { : ObjectId('4fd97a4805a35677eff3737a') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff37378') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|175||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff3737a') } max: { _id: ObjectId('4fd97a4805a35677eff37560') } dataWritten: 701634 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff3737a') } -->> { : ObjectId('4fd97a4805a35677eff37560') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4805a35677eff3755d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|176||000000000000000000000000 min: { _id: ObjectId('4fd97a4805a35677eff37560') } max: { _id: ObjectId('4fd97a4905a35677eff37747') } dataWritten: 658844 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4805a35677eff37560') } -->> { : ObjectId('4fd97a4905a35677eff37747') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff37743') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|177||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff37747') } max: { _id: ObjectId('4fd97a4905a35677eff3792f') } dataWritten: 733238 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff37747') } -->> { : ObjectId('4fd97a4905a35677eff3792f') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff3792a') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|178||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff3792f') } max: { _id: ObjectId('4fd97a4905a35677eff37b15') } dataWritten: 687688 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff3792f') } -->> { : ObjectId('4fd97a4905a35677eff37b15') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff37b12') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|179||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff37b15') } max: { _id: ObjectId('4fd97a4905a35677eff37cff') } dataWritten: 663306 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff37b15') } -->> { : ObjectId('4fd97a4905a35677eff37cff') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff37cf8') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|180||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff37cff') } max: { _id: ObjectId('4fd97a4905a35677eff37ee8') } dataWritten: 662501 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff37cff') } -->> { : ObjectId('4fd97a4905a35677eff37ee8') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff37ee2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|181||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff37ee8') } max: { _id: ObjectId('4fd97a4905a35677eff380d0') } dataWritten: 640532 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff37ee8') } -->> { : ObjectId('4fd97a4905a35677eff380d0') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff380cb') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|182||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff380d0') } max: { _id: ObjectId('4fd97a4905a35677eff382b9') } dataWritten: 678769 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff380d0') } -->> { : ObjectId('4fd97a4905a35677eff382b9') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff382b3') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|183||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff382b9') } max: { _id: ObjectId('4fd97a4905a35677eff3849e') } dataWritten: 615692 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff382b9') } -->> { : ObjectId('4fd97a4905a35677eff3849e') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff3849c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|184||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff3849e') } max: { _id: ObjectId('4fd97a4905a35677eff38684') } dataWritten: 572053 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff3849e') } -->> { : ObjectId('4fd97a4905a35677eff38684') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff38681') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|185||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff38684') } max: { _id: ObjectId('4fd97a4905a35677eff38869') } dataWritten: 647140 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff38684') } -->> { : ObjectId('4fd97a4905a35677eff38869') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff38867') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|186||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff38869') } max: { _id: ObjectId('4fd97a4905a35677eff38a4e') } dataWritten: 543487 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff38869') } -->> { : ObjectId('4fd97a4905a35677eff38a4e') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff38a4c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|187||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff38a4e') } max: { _id: ObjectId('4fd97a4905a35677eff38c32') } dataWritten: 534363 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff38a4e') } -->> { : ObjectId('4fd97a4905a35677eff38c32') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff38c31') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|188||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff38c32') } max: { _id: ObjectId('4fd97a4905a35677eff38e1d') } dataWritten: 602700 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff38c32') } -->> { : ObjectId('4fd97a4905a35677eff38e1d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff38e15') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|189||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff38e1d') } max: { _id: ObjectId('4fd97a4905a35677eff39001') } dataWritten: 611719 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff38e1d') } -->> { : ObjectId('4fd97a4905a35677eff39001') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff39000') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|190||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff39001') } max: { _id: ObjectId('4fd97a4905a35677eff391e8') } dataWritten: 689716 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff39001') } -->> { : ObjectId('4fd97a4905a35677eff391e8') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff391e4') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|191||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff391e8') } max: { _id: ObjectId('4fd97a4905a35677eff393cf') } dataWritten: 580251 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff391e8') } -->> { : ObjectId('4fd97a4905a35677eff393cf') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff393cb') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|192||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff393cf') } max: { _id: ObjectId('4fd97a4905a35677eff395b6') } dataWritten: 602408 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff393cf') } -->> { : ObjectId('4fd97a4905a35677eff395b6') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff395b2') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|193||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff395b6') } max: { _id: ObjectId('4fd97a4905a35677eff3979b') } dataWritten: 617296 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff395b6') } -->> { : ObjectId('4fd97a4905a35677eff3979b') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4905a35677eff39799') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|194||000000000000000000000000 min: { _id: ObjectId('4fd97a4905a35677eff3979b') } max: { _id: ObjectId('4fd97a4a05a35677eff39985') } dataWritten: 536051 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4905a35677eff3979b') } -->> { : ObjectId('4fd97a4a05a35677eff39985') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3997e') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|195||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff39985') } max: { _id: ObjectId('4fd97a4a05a35677eff39b6a') } dataWritten: 627756 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff39985') } -->> { : ObjectId('4fd97a4a05a35677eff39b6a') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff39b68') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|196||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff39b6a') } max: { _id: ObjectId('4fd97a4a05a35677eff39d51') } dataWritten: 563096 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff39b6a') } -->> { : ObjectId('4fd97a4a05a35677eff39d51') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff39d4d') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|197||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff39d51') } max: { _id: ObjectId('4fd97a4a05a35677eff39f36') } dataWritten: 660953 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff39d51') } -->> { : ObjectId('4fd97a4a05a35677eff39f36') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff39f34') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|198||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff39f36') } max: { _id: ObjectId('4fd97a4a05a35677eff3a121') } dataWritten: 717365 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff39f36') } -->> { : ObjectId('4fd97a4a05a35677eff3a121') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3a119') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|199||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff3a121') } max: { _id: ObjectId('4fd97a4a05a35677eff3a306') } dataWritten: 531296 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff3a121') } -->> { : ObjectId('4fd97a4a05a35677eff3a306') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3a304') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|200||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff3a306') } max: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') } dataWritten: 708450 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff3a306') } -->> { : ObjectId('4fd97a4a05a35677eff3a4ed') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3a4e9') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|201||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') } max: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') } dataWritten: 530606 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff3a4ed') } -->> { : ObjectId('4fd97a4a05a35677eff3a6d3') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3a6d0') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|202||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') } max: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') } dataWritten: 564038 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff3a6d3') } -->> { : ObjectId('4fd97a4a05a35677eff3a8b9') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3a8b6') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|203||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') } max: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') } dataWritten: 731434 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff3a8b9') } -->> { : ObjectId('4fd97a4a05a35677eff3aa9d') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3aa9c') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0001:localhost:30001 lastmod: 1|204||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') } max: { _id: ObjectId('4fd97a4a05a35677eff3ac84') } dataWritten: 690773 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:15 [conn2] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff3aa9d') } -->> { : ObjectId('4fd97a4a05a35677eff3ac84') }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split { _id: ObjectId('4fd97a4a05a35677eff3ac80') }
m30999| Thu Jun 14 01:45:15 [conn] about to initiate autosplit: ns:test.mrShardedOut at: shard0000:localhost:30000 lastmod: 1|205||000000000000000000000000 min: { _id: ObjectId('4fd97a4a05a35677eff3ac84') } max: { _id: MaxKey } dataWritten: 444365 splitThreshold: 943718
m30000| Thu Jun 14 01:45:15 [conn11] request split points lookup for chunk test.mrShardedOut { : ObjectId('4fd97a4a05a35677eff3ac84') } -->> { : MaxKey }
m30999| Thu Jun 14 01:45:15 [conn] chunk not full enough to trigger auto-split no split entry
m30000| Thu Jun 14 01:45:15 [conn11] CMD: drop test.tmp.mrs.foo_1339652687_0
m30001| Thu Jun 14 01:45:15 [conn2] CMD: drop test.tmp.mrs.foo_1339652687_0
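
The long run of m30999/m30000/m30001 messages above is the mongos router checking each chunk of test.mrShardedOut while the sharded map-reduce output is written: the router accumulates the bytes it has routed to a chunk (dataWritten), asks the owning shard to look up candidate split points against the 1048576-byte splitThreshold, and, because every chunk in this run is still smaller than the threshold, ends each check with "chunk not full enough to trigger auto-split". The mongo-shell sketch below mirrors only that observable flow; maybeAutoSplit and lookupSplitPoints are hypothetical names used for illustration, not mongos internals.

// Illustrative sketch of the autosplit check visible in the log above (not mongos source).
function lookupSplitPoints(chunk, splitThreshold) {
    // Stand-in for the shard-side "request split points lookup"; in this run every chunk
    // is still under splitThreshold, so no split points come back.
    return [];
}

function maybeAutoSplit(chunk, splitThreshold) {
    print("about to initiate autosplit: min " + tojson(chunk.min) + " max " + tojson(chunk.max) +
          " dataWritten: " + chunk.dataWritten + " splitThreshold: " + splitThreshold);
    var splitPoints = lookupSplitPoints(chunk, splitThreshold);
    if (splitPoints.length === 0) {
        print("chunk not full enough to trigger auto-split");
        chunk.dataWritten = 0;    // in this sketch the counter simply resets until the next check
        return false;
    }
    // A non-empty result would lead to an actual chunk split here (never reached in this run).
    return true;
}

// Values taken from one of the log lines above:
var chunk = { min: { _id: "4fd97a4405a35677eff31473" },
              max: { _id: "4fd97a4405a35677eff3165b" },
              dataWritten: 558317 };
maybeAutoSplit(chunk, 1048576);
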
{
"result" : "mrShardedOut",
"counts" : {
"input" : NumberLong(100000),
"emit" : NumberLong(100000),
"reduce" : NumberLong(0),
"output" : NumberLong(100000)
},
"timeMillis" : 28667,
"timing" : {
"shardProcessing" : 20954,
"postProcessing" : 7713
},
"shardCounts" : {
"localhost:30000" : {
"input" : 529,
"emit" : 529,
"reduce" : 0,
"output" : 529
},
"localhost:30001" : {
"input" : 99471,
"emit" : 99471,
"reduce" : 0,
"output" : 99471
}
},
"postProcessCounts" : {
"localhost:30000" : {
"input" : NumberLong(49872),
"reduce" : NumberLong(0),
"output" : NumberLong(49872)
},
"localhost:30001" : {
"input" : NumberLong(50128),
"reduce" : NumberLong(0),
"output" : NumberLong(50128)
}
},
"ok" : 1,
}
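
The document above is the mapReduce result returned to the shell once the temporary tmp.mrs.foo_* collections are dropped. Its numbers are internally consistent: the shardCounts inputs (529 + 99471) and the postProcessCounts outputs (49872 + 50128) both total the 100000 documents reported in counts, and timing.shardProcessing plus timing.postProcessing (20954 + 7713 ms) accounts for the full timeMillis of 28667 ms. A result of this shape comes from a sharded-output mapReduce call along the following lines; the map and reduce bodies are placeholders, since the actual functions are not shown in this excerpt, and only the source collection (test.foo, implied by the tmp.mrs.foo_* names) and the output collection name are taken from the log.

// Hedged sketch of the kind of call that yields the result document above.
var res = db.foo.mapReduce(
    function () { emit(this._id, 1); },                     // placeholder map
    function (key, values) { return Array.sum(values); },   // placeholder reduce
    { out: { replace: "mrShardedOut", sharded: true } }     // sharded output collection
);
printjson(res);   // prints counts, timing, shardCounts and postProcessCounts as above
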
m30000| Thu Jun 14 01:45:16 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.3, size: 128MB, took 3.427 secs
Number of chunks: 206
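
The per-chunk documents printed below are the config database's chunks entries for test.mrShardedOut (206 of them, split between shard0000 and shard0001). One way such a listing could be produced from a shell connected to the mongos (the m30999 process in this test):

var configDB = db.getSiblingDB("config");
print("Number of chunks: " + configDB.chunks.count({ ns: "test.mrShardedOut" }));
configDB.chunks.find({ ns: "test.mrShardedOut" }).sort({ min: 1 }).forEach(printjson);
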
{
"_id" : "test.mrShardedOut-_id_MinKey",
"lastmod" : Timestamp(1000, 0),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : { $minKey : 1 }
},
"max" : {
"_id" : ObjectId("4fd97a3c05a35677eff228c8")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff228c8')",
"lastmod" : Timestamp(1000, 1),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3c05a35677eff228c8")
},
"max" : {
"_id" : ObjectId("4fd97a3c05a35677eff22aac")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22aac')",
"lastmod" : Timestamp(1000, 2),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3c05a35677eff22aac")
},
"max" : {
"_id" : ObjectId("4fd97a3c05a35677eff22c95")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22c95')",
"lastmod" : Timestamp(1000, 3),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3c05a35677eff22c95")
},
"max" : {
"_id" : ObjectId("4fd97a3c05a35677eff22e7b")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22e7b')",
"lastmod" : Timestamp(1000, 4),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3c05a35677eff22e7b")
},
"max" : {
"_id" : ObjectId("4fd97a3c05a35677eff2305f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff2305f')",
"lastmod" : Timestamp(1000, 5),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3c05a35677eff2305f")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff23246")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23246')",
"lastmod" : Timestamp(1000, 6),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff23246")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff2342c")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2342c')",
"lastmod" : Timestamp(1000, 7),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff2342c")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff23611")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23611')",
"lastmod" : Timestamp(1000, 8),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff23611")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff237f5")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff237f5')",
"lastmod" : Timestamp(1000, 9),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff237f5")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff239dc")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff239dc')",
"lastmod" : Timestamp(1000, 10),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff239dc")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff23bc4")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23bc4')",
"lastmod" : Timestamp(1000, 11),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff23bc4")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff23da9")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23da9')",
"lastmod" : Timestamp(1000, 12),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff23da9")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff23f8f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23f8f')",
"lastmod" : Timestamp(1000, 13),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff23f8f")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff24176")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24176')",
"lastmod" : Timestamp(1000, 14),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff24176")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff2435d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2435d')",
"lastmod" : Timestamp(1000, 15),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff2435d")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff24541")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24541')",
"lastmod" : Timestamp(1000, 16),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff24541")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff24727")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24727')",
"lastmod" : Timestamp(1000, 17),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff24727")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff2490f")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2490f')",
"lastmod" : Timestamp(1000, 18),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff2490f")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff24af4")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24af4')",
"lastmod" : Timestamp(1000, 19),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff24af4")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff24cde")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24cde')",
"lastmod" : Timestamp(1000, 20),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff24cde")
},
"max" : {
"_id" : ObjectId("4fd97a3d05a35677eff24ec4")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24ec4')",
"lastmod" : Timestamp(1000, 21),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3d05a35677eff24ec4")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff250ad")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff250ad')",
"lastmod" : Timestamp(1000, 22),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff250ad")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff25295")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25295')",
"lastmod" : Timestamp(1000, 23),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff25295")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff2547d")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2547d')",
"lastmod" : Timestamp(1000, 24),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff2547d")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff25663")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25663')",
"lastmod" : Timestamp(1000, 25),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff25663")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff2584a")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2584a')",
"lastmod" : Timestamp(1000, 26),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff2584a")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff25a31")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25a31')",
"lastmod" : Timestamp(1000, 27),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff25a31")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff25c16")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25c16')",
"lastmod" : Timestamp(1000, 28),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff25c16")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff25e01")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25e01')",
"lastmod" : Timestamp(1000, 29),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff25e01")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff25fe8")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25fe8')",
"lastmod" : Timestamp(1000, 30),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff25fe8")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff261d0")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff261d0')",
"lastmod" : Timestamp(1000, 31),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff261d0")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff263b4")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff263b4')",
"lastmod" : Timestamp(1000, 32),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff263b4")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff26598")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff26598')",
"lastmod" : Timestamp(1000, 33),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff26598")
},
"max" : {
"_id" : ObjectId("4fd97a3e05a35677eff2677e")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2677e')",
"lastmod" : Timestamp(1000, 34),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3e05a35677eff2677e")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff26964")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26964')",
"lastmod" : Timestamp(1000, 35),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff26964")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff26b4c")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26b4c')",
"lastmod" : Timestamp(1000, 36),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff26b4c")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff26d35")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26d35')",
"lastmod" : Timestamp(1000, 37),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff26d35")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff26f1f")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26f1f')",
"lastmod" : Timestamp(1000, 38),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff26f1f")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff27105")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27105')",
"lastmod" : Timestamp(1000, 39),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff27105")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff272ec")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff272ec')",
"lastmod" : Timestamp(1000, 40),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff272ec")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff274d5")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff274d5')",
"lastmod" : Timestamp(1000, 41),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff274d5")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff276ba")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff276ba')",
"lastmod" : Timestamp(1000, 42),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff276ba")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff278a1")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff278a1')",
"lastmod" : Timestamp(1000, 43),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff278a1")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff27a87")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27a87')",
"lastmod" : Timestamp(1000, 44),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff27a87")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff27c6f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27c6f')",
"lastmod" : Timestamp(1000, 45),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff27c6f")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff27e57")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27e57')",
"lastmod" : Timestamp(1000, 46),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff27e57")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff2803f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2803f')",
"lastmod" : Timestamp(1000, 47),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff2803f")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff28226")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff28226')",
"lastmod" : Timestamp(1000, 48),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff28226")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff2840d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2840d')",
"lastmod" : Timestamp(1000, 49),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff2840d")
},
"max" : {
"_id" : ObjectId("4fd97a3f05a35677eff285f3")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff285f3')",
"lastmod" : Timestamp(1000, 50),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a3f05a35677eff285f3")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff287d7")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff287d7')",
"lastmod" : Timestamp(1000, 51),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff287d7")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff289bf")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff289bf')",
"lastmod" : Timestamp(1000, 52),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff289bf")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff28ba4")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28ba4')",
"lastmod" : Timestamp(1000, 53),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff28ba4")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff28d8b")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28d8b')",
"lastmod" : Timestamp(1000, 54),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff28d8b")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff28f71")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28f71')",
"lastmod" : Timestamp(1000, 55),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff28f71")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff29159")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29159')",
"lastmod" : Timestamp(1000, 56),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff29159")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff2933f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2933f')",
"lastmod" : Timestamp(1000, 57),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff2933f")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff29523")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29523')",
"lastmod" : Timestamp(1000, 58),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff29523")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff29708")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29708')",
"lastmod" : Timestamp(1000, 59),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff29708")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff298ed")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff298ed')",
"lastmod" : Timestamp(1000, 60),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff298ed")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff29ad4")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29ad4')",
"lastmod" : Timestamp(1000, 61),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff29ad4")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff29cba")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29cba')",
"lastmod" : Timestamp(1000, 62),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff29cba")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff29e9f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29e9f')",
"lastmod" : Timestamp(1000, 63),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff29e9f")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff2a086")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a086')",
"lastmod" : Timestamp(1000, 64),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff2a086")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff2a26b")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a26b')",
"lastmod" : Timestamp(1000, 65),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff2a26b")
},
"max" : {
"_id" : ObjectId("4fd97a4005a35677eff2a450")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a450')",
"lastmod" : Timestamp(1000, 66),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4005a35677eff2a450")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2a636")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a636')",
"lastmod" : Timestamp(1000, 67),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2a636")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2a81d")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a81d')",
"lastmod" : Timestamp(1000, 68),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2a81d")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2aa03")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2aa03')",
"lastmod" : Timestamp(1000, 69),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2aa03")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2abea")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2abea')",
"lastmod" : Timestamp(1000, 70),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2abea")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2add0")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2add0')",
"lastmod" : Timestamp(1000, 71),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2add0")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2afb8")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2afb8')",
"lastmod" : Timestamp(1000, 72),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2afb8")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2b1a0")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b1a0')",
"lastmod" : Timestamp(1000, 73),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2b1a0")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2b387")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b387')",
"lastmod" : Timestamp(1000, 74),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2b387")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2b56f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b56f')",
"lastmod" : Timestamp(1000, 75),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2b56f")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2b757")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b757')",
"lastmod" : Timestamp(1000, 76),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2b757")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2b93b")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b93b')",
"lastmod" : Timestamp(1000, 77),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2b93b")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2bb23")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bb23')",
"lastmod" : Timestamp(1000, 78),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2bb23")
},
"max" : {
"_id" : ObjectId("4fd97a4105a35677eff2bd07")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bd07')",
"lastmod" : Timestamp(1000, 79),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4105a35677eff2bd07")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2beee")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2beee')",
"lastmod" : Timestamp(1000, 80),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2beee")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2c0d4")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c0d4')",
"lastmod" : Timestamp(1000, 81),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2c0d4")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2c2bb")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c2bb')",
"lastmod" : Timestamp(1000, 82),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2c2bb")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2c4a2")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c4a2')",
"lastmod" : Timestamp(1000, 83),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2c4a2")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2c687")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c687')",
"lastmod" : Timestamp(1000, 84),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2c687")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2c86f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c86f')",
"lastmod" : Timestamp(1000, 85),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2c86f")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2ca54")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ca54')",
"lastmod" : Timestamp(1000, 86),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2ca54")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2cc39")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2cc39')",
"lastmod" : Timestamp(1000, 87),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2cc39")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2ce20")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ce20')",
"lastmod" : Timestamp(1000, 88),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2ce20")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2d008")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d008')",
"lastmod" : Timestamp(1000, 89),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2d008")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2d1ef")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d1ef')",
"lastmod" : Timestamp(1000, 90),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2d1ef")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2d3d5")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d3d5')",
"lastmod" : Timestamp(1000, 91),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2d3d5")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2d5bc")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d5bc')",
"lastmod" : Timestamp(1000, 92),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2d5bc")
},
"max" : {
"_id" : ObjectId("4fd97a4205a35677eff2d7a1")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d7a1')",
"lastmod" : Timestamp(1000, 93),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4205a35677eff2d7a1")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2d986")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2d986')",
"lastmod" : Timestamp(1000, 94),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2d986")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2db6f")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2db6f')",
"lastmod" : Timestamp(1000, 95),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2db6f")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2dd54")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2dd54')",
"lastmod" : Timestamp(1000, 96),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2dd54")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2df3e")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2df3e')",
"lastmod" : Timestamp(1000, 97),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2df3e")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2e127")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e127')",
"lastmod" : Timestamp(1000, 98),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2e127")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2e30d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e30d')",
"lastmod" : Timestamp(1000, 99),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2e30d")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2e4f2")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e4f2')",
"lastmod" : Timestamp(1000, 100),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2e4f2")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2e6d8")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e6d8')",
"lastmod" : Timestamp(1000, 101),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2e6d8")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2e8bf")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e8bf')",
"lastmod" : Timestamp(1000, 102),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2e8bf")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2eaa5")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2eaa5')",
"lastmod" : Timestamp(1000, 103),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2eaa5")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2ec89")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ec89')",
"lastmod" : Timestamp(1000, 104),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2ec89")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2ee6d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ee6d')",
"lastmod" : Timestamp(1000, 105),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2ee6d")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2f052")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f052')",
"lastmod" : Timestamp(1000, 106),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2f052")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2f239")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f239')",
"lastmod" : Timestamp(1000, 107),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2f239")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2f41f")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f41f')",
"lastmod" : Timestamp(1000, 108),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2f41f")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2f603")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f603')",
"lastmod" : Timestamp(1000, 109),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2f603")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2f7e7")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f7e7')",
"lastmod" : Timestamp(1000, 110),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2f7e7")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2f9cd")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f9cd')",
"lastmod" : Timestamp(1000, 111),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2f9cd")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2fbb4")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fbb4')",
"lastmod" : Timestamp(1000, 112),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2fbb4")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2fd9a")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fd9a')",
"lastmod" : Timestamp(1000, 113),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2fd9a")
},
"max" : {
"_id" : ObjectId("4fd97a4305a35677eff2ff82")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ff82')",
"lastmod" : Timestamp(1000, 114),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4305a35677eff2ff82")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff3016a")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3016a')",
"lastmod" : Timestamp(1000, 115),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff3016a")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff30351")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30351')",
"lastmod" : Timestamp(1000, 116),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff30351")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff30537")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30537')",
"lastmod" : Timestamp(1000, 117),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff30537")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff30721")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30721')",
"lastmod" : Timestamp(1000, 118),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff30721")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff30907")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30907')",
"lastmod" : Timestamp(1000, 119),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff30907")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff30aef")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30aef')",
"lastmod" : Timestamp(1000, 120),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff30aef")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff30cd5")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30cd5')",
"lastmod" : Timestamp(1000, 121),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff30cd5")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff30ebc")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30ebc')",
"lastmod" : Timestamp(1000, 122),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff30ebc")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff310a7")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff310a7')",
"lastmod" : Timestamp(1000, 123),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff310a7")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff3128e")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3128e')",
"lastmod" : Timestamp(1000, 124),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff3128e")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff31473")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31473')",
"lastmod" : Timestamp(1000, 125),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff31473")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff3165b")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3165b')",
"lastmod" : Timestamp(1000, 126),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff3165b")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff31841")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31841')",
"lastmod" : Timestamp(1000, 127),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff31841")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff31a28")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31a28')",
"lastmod" : Timestamp(1000, 128),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff31a28")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff31c0d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31c0d')",
"lastmod" : Timestamp(1000, 129),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff31c0d")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff31df3")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31df3')",
"lastmod" : Timestamp(1000, 130),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff31df3")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff31fda")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31fda')",
"lastmod" : Timestamp(1000, 131),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff31fda")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff321bf")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff321bf')",
"lastmod" : Timestamp(1000, 132),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff321bf")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff323a4")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff323a4')",
"lastmod" : Timestamp(1000, 133),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff323a4")
},
"max" : {
"_id" : ObjectId("4fd97a4405a35677eff3258c")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3258c')",
"lastmod" : Timestamp(1000, 134),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4405a35677eff3258c")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff32774")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32774')",
"lastmod" : Timestamp(1000, 135),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff32774")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff32958")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32958')",
"lastmod" : Timestamp(1000, 136),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff32958")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff32b3d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32b3d')",
"lastmod" : Timestamp(1000, 137),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff32b3d")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff32d23")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32d23')",
"lastmod" : Timestamp(1000, 138),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff32d23")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff32f0c")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32f0c')",
"lastmod" : Timestamp(1000, 139),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff32f0c")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff330f5")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff330f5')",
"lastmod" : Timestamp(1000, 140),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff330f5")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff332d9")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff332d9')",
"lastmod" : Timestamp(1000, 141),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff332d9")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff334c2")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff334c2')",
"lastmod" : Timestamp(1000, 142),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff334c2")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff336ab")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff336ab')",
"lastmod" : Timestamp(1000, 143),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff336ab")
},
"max" : {
"_id" : ObjectId("4fd97a4505a35677eff33891")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff33891')",
"lastmod" : Timestamp(1000, 144),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4505a35677eff33891")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff33a77")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33a77')",
"lastmod" : Timestamp(1000, 145),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff33a77")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff33c5c")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33c5c')",
"lastmod" : Timestamp(1000, 146),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff33c5c")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff33e41")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33e41')",
"lastmod" : Timestamp(1000, 147),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff33e41")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff34026")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff34026')",
"lastmod" : Timestamp(1000, 148),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff34026")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff3420d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff3420d')",
"lastmod" : Timestamp(1000, 149),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff3420d")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff343f3")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff343f3')",
"lastmod" : Timestamp(1000, 150),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff343f3")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff345d9")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff345d9')",
"lastmod" : Timestamp(1000, 151),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff345d9")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff347c1")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff347c1')",
"lastmod" : Timestamp(1000, 152),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff347c1")
},
"max" : {
"_id" : ObjectId("4fd97a4605a35677eff349a9")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff349a9')",
"lastmod" : Timestamp(1000, 153),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4605a35677eff349a9")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff34b90")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34b90')",
"lastmod" : Timestamp(1000, 154),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff34b90")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff34d79")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34d79')",
"lastmod" : Timestamp(1000, 155),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff34d79")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff34f5f")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34f5f')",
"lastmod" : Timestamp(1000, 156),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff34f5f")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff35147")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35147')",
"lastmod" : Timestamp(1000, 157),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff35147")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff3532c")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff3532c')",
"lastmod" : Timestamp(1000, 158),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff3532c")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff35511")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35511')",
"lastmod" : Timestamp(1000, 159),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff35511")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff356fa")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff356fa')",
"lastmod" : Timestamp(1000, 160),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff356fa")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff358e1")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff358e1')",
"lastmod" : Timestamp(1000, 161),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff358e1")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff35ac6")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35ac6')",
"lastmod" : Timestamp(1000, 162),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff35ac6")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff35cab")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35cab')",
"lastmod" : Timestamp(1000, 163),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff35cab")
},
"max" : {
"_id" : ObjectId("4fd97a4705a35677eff35e91")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35e91')",
"lastmod" : Timestamp(1000, 164),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4705a35677eff35e91")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff3607a")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3607a')",
"lastmod" : Timestamp(1000, 165),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff3607a")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff3625f")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3625f')",
"lastmod" : Timestamp(1000, 166),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff3625f")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff36447")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36447')",
"lastmod" : Timestamp(1000, 167),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff36447")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff3662c")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3662c')",
"lastmod" : Timestamp(1000, 168),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff3662c")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff36814")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36814')",
"lastmod" : Timestamp(1000, 169),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff36814")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff369f9")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff369f9')",
"lastmod" : Timestamp(1000, 170),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff369f9")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff36be0")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36be0')",
"lastmod" : Timestamp(1000, 171),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff36be0")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff36dca")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36dca')",
"lastmod" : Timestamp(1000, 172),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff36dca")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff36faf")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36faf')",
"lastmod" : Timestamp(1000, 173),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff36faf")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff37195")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37195')",
"lastmod" : Timestamp(1000, 174),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff37195")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff3737a")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3737a')",
"lastmod" : Timestamp(1000, 175),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff3737a")
},
"max" : {
"_id" : ObjectId("4fd97a4805a35677eff37560")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37560')",
"lastmod" : Timestamp(1000, 176),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4805a35677eff37560")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff37747")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37747')",
"lastmod" : Timestamp(1000, 177),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff37747")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff3792f")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3792f')",
"lastmod" : Timestamp(1000, 178),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff3792f")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff37b15")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37b15')",
"lastmod" : Timestamp(1000, 179),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff37b15")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff37cff")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37cff')",
"lastmod" : Timestamp(1000, 180),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff37cff")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff37ee8")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37ee8')",
"lastmod" : Timestamp(1000, 181),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff37ee8")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff380d0")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff380d0')",
"lastmod" : Timestamp(1000, 182),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff380d0")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff382b9")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff382b9')",
"lastmod" : Timestamp(1000, 183),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff382b9")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff3849e")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3849e')",
"lastmod" : Timestamp(1000, 184),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff3849e")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff38684")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38684')",
"lastmod" : Timestamp(1000, 185),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff38684")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff38869")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38869')",
"lastmod" : Timestamp(1000, 186),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff38869")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff38a4e")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38a4e')",
"lastmod" : Timestamp(1000, 187),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff38a4e")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff38c32")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38c32')",
"lastmod" : Timestamp(1000, 188),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff38c32")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff38e1d")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38e1d')",
"lastmod" : Timestamp(1000, 189),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff38e1d")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff39001")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff39001')",
"lastmod" : Timestamp(1000, 190),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff39001")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff391e8")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff391e8')",
"lastmod" : Timestamp(1000, 191),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff391e8")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff393cf")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff393cf')",
"lastmod" : Timestamp(1000, 192),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff393cf")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff395b6")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff395b6')",
"lastmod" : Timestamp(1000, 193),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff395b6")
},
"max" : {
"_id" : ObjectId("4fd97a4905a35677eff3979b")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3979b')",
"lastmod" : Timestamp(1000, 194),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4905a35677eff3979b")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff39985")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39985')",
"lastmod" : Timestamp(1000, 195),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff39985")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff39b6a")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39b6a')",
"lastmod" : Timestamp(1000, 196),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff39b6a")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff39d51")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39d51')",
"lastmod" : Timestamp(1000, 197),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff39d51")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff39f36")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39f36')",
"lastmod" : Timestamp(1000, 198),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff39f36")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a121")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a121')",
"lastmod" : Timestamp(1000, 199),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a121")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a306")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a306')",
"lastmod" : Timestamp(1000, 200),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a306")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a4ed")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a4ed')",
"lastmod" : Timestamp(1000, 201),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a4ed")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a6d3")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a6d3')",
"lastmod" : Timestamp(1000, 202),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a6d3")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a8b9")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a8b9')",
"lastmod" : Timestamp(1000, 203),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff3a8b9")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff3aa9d")
},
"shard" : "shard0000"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3aa9d')",
"lastmod" : Timestamp(1000, 204),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff3aa9d")
},
"max" : {
"_id" : ObjectId("4fd97a4a05a35677eff3ac84")
},
"shard" : "shard0001"
}
{
"_id" : "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3ac84')",
"lastmod" : Timestamp(1000, 205),
"lastmodEpoch" : ObjectId("4fd97a640d2fef4d6a507be7"),
"ns" : "test.mrShardedOut",
"min" : {
"_id" : ObjectId("4fd97a4a05a35677eff3ac84")
},
"max" : {
"_id" : { $maxKey : 1 }
},
"shard" : "shard0000"
}
NUMBER OF CHUNKS FOR SHARD shard0001: 103
NUMBER OF CHUNKS FOR SHARD shard0000: 103
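The chunk listing and the per-shard counts above come straight from the config database; a short mongo-shell sketch like the one below would reproduce them, assuming the m30999 mongos from this run is still reachable and the cluster is intact.

// Sketch only: reproduce the test.mrShardedOut chunk dump and per-shard counts.
var confDB = db.getSiblingDB("config");

// Chunk ranges in shard-key order, matching the printjson-style listing above.
confDB.chunks.find({ ns: "test.mrShardedOut" }).sort({ min: 1 }).forEach(printjson);

// Chunks per shard; this run reports 103 for shard0000 and 103 for shard0001.
var counts = {};
confDB.chunks.find({ ns: "test.mrShardedOut" }).forEach(function (c) {
    counts[c.shard] = (counts[c.shard] || 0) + 1;
});
printjson(counts);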
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 337.6965417950217 } -->> { : 344.8762285660836 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 47.94081917961535 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 898.6566515076229 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 615.3266278873516 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 30.85678137192671 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 167.6382092456179 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 490.1028421929578 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 526.919018850918 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:45:18 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 526.919018850918 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:45:18 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 531.7597013546634 } ], shardId: "test.foo-a_526.919018850918", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:18 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:18 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6e32a28802daeee03b
m30001| Thu Jun 14 01:45:18 [conn2] splitChunk accepted at version 4|1||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:18 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:18-138", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652718855), what: "split", ns: "test.foo", details: { before: { min: { a: 526.919018850918 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 526.919018850918 }, max: { a: 531.7597013546634 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 531.7597013546634 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:18 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
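The splitChunk request logged above is the shard-side half of an autosplit; the same split could in principle be requested by hand through the mongos with the split admin command. This is illustrative only, reusing the split key the autosplitter picked in this run.

// Sketch: manual equivalent of the autosplit at a = 531.7597013546634,
// issued against the mongos rather than directly against the shard.
db.adminCommand({ split: "test.foo", middle: { a: 531.7597013546634 } });
// The shell helper sh.splitAt("test.foo", { a: 531.7597013546634 }) wraps the same command.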
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 216.8904302452864 } -->> { : 225.5962198744838 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 392.8718206829087 } -->> { : 400.6101810646703 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 898.6566515076229 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 284.9747465988205 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 363.6779080113047 } -->> { : 369.0981926515277 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 848.2332478721062 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 136.5735165062921 }
m30001| Thu Jun 14 01:45:18 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 123.1918419151289 } -->> { : 136.5735165062921 }
m30001| Thu Jun 14 01:45:18 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 123.1918419151289 }, max: { a: 136.5735165062921 }, from: "shard0001", splitKeys: [ { a: 127.4590140914801 } ], shardId: "test.foo-a_123.1918419151289", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:18 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:18 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6e32a28802daeee03c
m30001| Thu Jun 14 01:45:18 [conn2] splitChunk accepted at version 4|3||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:18 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:18-139", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652718948), what: "split", ns: "test.foo", details: { before: { min: { a: 123.1918419151289 }, max: { a: 136.5735165062921 }, lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 123.1918419151289 }, max: { a: 127.4590140914801 }, lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 127.4590140914801 }, max: { a: 136.5735165062921 }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:18 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30001| Thu Jun 14 01:45:18 [conn2] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 623.3985075048967 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 628.1995001147562 } ], shardId: "test.foo-a_623.3985075048967", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee03d
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|5||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-140", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719012), what: "split", ns: "test.foo", details: { before: { min: { a: 623.3985075048967 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 623.3985075048967 }, max: { a: 628.1995001147562 }, lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 628.1995001147562 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 4000|7, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 146.6503611644078 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 136.5735165062921 } -->> { : 146.6503611644078 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 136.5735165062921 }, max: { a: 146.6503611644078 }, from: "shard0001", splitKeys: [ { a: 141.1884883168546 } ], shardId: "test.foo-a_136.5735165062921", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee03e
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|7||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-141", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719039), what: "split", ns: "test.foo", details: { before: { min: { a: 136.5735165062921 }, max: { a: 146.6503611644078 }, lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 136.5735165062921 }, max: { a: 141.1884883168546 }, lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 141.1884883168546 }, max: { a: 146.6503611644078 }, lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 648.6747268265868 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 30.85678137192671 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 248.3080159156712 } -->> { : 254.1395685736485 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 571.914212129846 } -->> { : 580.4600029065366 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 483.6281235892167 } -->> { : 490.1028421929578 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 797.6352444405507 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 797.6352444405507 }, max: { a: 815.7684070742035 }, from: "shard0001", splitKeys: [ { a: 802.4966878498034 } ], shardId: "test.foo-a_797.6352444405507", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee03f
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|9||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-142", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719183), what: "split", ns: "test.foo", details: { before: { min: { a: 797.6352444405507 }, max: { a: 815.7684070742035 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 797.6352444405507 }, max: { a: 802.4966878498034 }, lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 802.4966878498034 }, max: { a: 815.7684070742035 }, lastmod: Timestamp 4000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 848.2332478721062 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 240.0709323500288 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 703.7520953686671 } -->> { : 708.8986861220777 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 599.2155367136296 } -->> { : 610.6068178358934 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 599.2155367136296 } -->> { : 610.6068178358934 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 599.2155367136296 }, max: { a: 610.6068178358934 }, from: "shard0001", splitKeys: [ { a: 603.53104016638 } ], shardId: "test.foo-a_599.2155367136296", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee040
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|11||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-143", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719227), what: "split", ns: "test.foo", details: { before: { min: { a: 599.2155367136296 }, max: { a: 610.6068178358934 }, lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 599.2155367136296 }, max: { a: 603.53104016638 }, lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 603.53104016638 }, max: { a: 610.6068178358934 }, lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 694.6501944983177 } -->> { : 703.7520953686671 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 422.4151431966537 } -->> { : 427.2300955074828 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 991.2502100401695 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 66.37486853611429 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 628.1995001147562 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 628.1995001147562 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 628.1995001147562 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 632.4786347534061 } ], shardId: "test.foo-a_628.1995001147562", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee041
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|13||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-144", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719334), what: "split", ns: "test.foo", details: { before: { min: { a: 628.1995001147562 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 4000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 628.1995001147562 }, max: { a: 632.4786347534061 }, lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 632.4786347534061 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 4000|15, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
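Each "about to log metadata event ... what: 'split'" line above corresponds to a document written to config.changelog; a sketch for pulling the most recent split entries for test.foo, again assuming a live cluster from this run:

// Sketch: recent split events for test.foo from the config changelog.
db.getSiblingDB("config").changelog
    .find({ what: "split", ns: "test.foo" })
    .sort({ time: -1 })
    .limit(5)
    .forEach(printjson);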
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|171||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 344.8762285660836 } dataWritten: 209929 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 342.3643570818544 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|148||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 57.56464668319472 } dataWritten: 209786 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 53.2232318576531 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 898.6566515076229 } dataWritten: 210761 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 897.1569980962992 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|63||000000000000000000000000 min: { a: 615.3266278873516 } max: { a: 623.3985075048967 } dataWritten: 210169 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 620.0199759183627 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|47||000000000000000000000000 min: { a: 30.85678137192671 } max: { a: 39.89992532263464 } dataWritten: 209953 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 35.83777966453938 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|163||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 167.6382092456179 } dataWritten: 210651 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 164.1364058850075 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { a: 490.1028421929578 } max: { a: 498.2021416153332 } dataWritten: 210209 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 495.0813296458895 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|120||000000000000000000000000 min: { a: 526.919018850918 } max: { a: 542.4296058071777 } dataWritten: 210484 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 136 version: 4|3||4fd97a3b0d2fef4d6a507be2 based on: 4|1||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:18 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|120||000000000000000000000000 min: { a: 526.919018850918 } max: { a: 542.4296058071777 } on: { a: 531.7597013546634 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|3, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
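The setShardVersion calls and the "ChunkManager: time to load chunks" line above are the mongos refreshing its routing table after the split; the version a mongos currently holds for the namespace can be inspected with getShardVersion (sketch, assuming the command is run through the m30999 mongos and that its output in this build matches the usual shape).

// Sketch: ask the mongos what version it has for test.foo; the reported
// Timestamp should line up with the 4000|x versions in the log above.
printjson(db.adminCommand({ getShardVersion: "test.foo" }));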
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|161||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 225.5962198744838 } dataWritten: 210298 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 221.8517382879306 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|169||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 400.6101810646703 } dataWritten: 210004 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 397.9451618711997 }
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 898.6566515076229 } dataWritten: 210382 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 897.027358408548 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|178||000000000000000000000000 min: { a: 284.9747465988205 } max: { a: 294.0222214358918 } dataWritten: 210348 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 289.9475003331963 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|38||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 369.0981926515277 } dataWritten: 210203 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 368.1013863086713 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|176||000000000000000000000000 min: { a: 848.2332478721062 } max: { a: 855.8703567421647 } dataWritten: 210236 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 853.0488112150679 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|95||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 136.5735165062921 } dataWritten: 209856 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 137 version: 4|5||4fd97a3b0d2fef4d6a507be2 based on: 4|3||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:18 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|95||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 136.5735165062921 } on: { a: 127.4590140914801 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|5, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:18 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } dataWritten: 210052 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 193.4725511855384 }
m30999| Thu Jun 14 01:45:18 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } dataWritten: 209761 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:18 [conn] chunk not full enough to trigger auto-split { a: 208.8196408687364 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 640.7093733209429 } dataWritten: 209818 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 138 version: 4|7||4fd97a3b0d2fef4d6a507be2 based on: 4|5||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|90||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 640.7093733209429 } on: { a: 628.1995001147562 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|7, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|123||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 146.6503611644078 } dataWritten: 210407 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 139 version: 4|9||4fd97a3b0d2fef4d6a507be2 based on: 4|7||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|123||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 146.6503611644078 } on: { a: 141.1884883168546 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|9, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|174||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 657.3538695372831 } dataWritten: 210655 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 653.6767790574108 }
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|47||000000000000000000000000 min: { a: 30.85678137192671 } max: { a: 39.89992532263464 } dataWritten: 210364 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 35.63220617105822 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 254.1395685736485 } dataWritten: 210616 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 252.6515736288918 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|157||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 580.4600029065366 } dataWritten: 209781 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 576.7595870507829 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|6||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 490.1028421929578 } dataWritten: 209999 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 488.3071833570206 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 209257 splitThreshold: 943718
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|75||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 815.7684070742035 } dataWritten: 210651 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 140 version: 4|11||4fd97a3b0d2fef4d6a507be2 based on: 4|9||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|75||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 815.7684070742035 } on: { a: 802.4966878498034 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|11, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|175||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 848.2332478721062 } dataWritten: 210425 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 845.128906829321 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|9||000000000000000000000000 min: { a: 240.0709323500288 } max: { a: 248.3080159156712 } dataWritten: 210755 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 244.6987614356553 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|64||000000000000000000000000 min: { a: 703.7520953686671 } max: { a: 708.8986861220777 } dataWritten: 210656 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 708.5431743886096 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|142||000000000000000000000000 min: { a: 599.2155367136296 } max: { a: 610.6068178358934 } dataWritten: 210419 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 141 version: 4|13||4fd97a3b0d2fef4d6a507be2 based on: 4|11||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|142||000000000000000000000000 min: { a: 599.2155367136296 } max: { a: 610.6068178358934 } on: { a: 603.53104016638 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|13, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|149||000000000000000000000000 min: { a: 694.6501944983177 } max: { a: 703.7520953686671 } dataWritten: 210544 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 699.506598243387 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|67||000000000000000000000000 min: { a: 422.4151431966537 } max: { a: 427.2300955074828 } dataWritten: 210163 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 426.7708117787124 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|49||000000000000000000000000 min: { a: 991.2502100401695 } max: { a: 998.3975234740553 } dataWritten: 209911 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 996.1668256759405 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|145||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 66.37486853611429 } dataWritten: 210048 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 62.35056994208621 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|7||000000000000000000000000 min: { a: 628.1995001147562 } max: { a: 640.7093733209429 } dataWritten: 210488 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 142 version: 4|15||4fd97a3b0d2fef4d6a507be2 based on: 4|13||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|7||000000000000000000000000 min: { a: 628.1995001147562 } max: { a: 640.7093733209429 } on: { a: 632.4786347534061 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|15, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 269.785248844529 } -->> { : 277.1560315461681 }
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|41||000000000000000000000000 min: { a: 269.785248844529 } max: { a: 277.1560315461681 } dataWritten: 210169 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 274.559018368476 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 47.94081917961535 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 615.3266278873516 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 848.2332478721062 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 526.919018850918 } -->> { : 531.7597013546634 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 199183 splitThreshold: 943718
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|148||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 57.56464668319472 } dataWritten: 210730 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 52.98710196256151 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|62||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 615.3266278873516 } dataWritten: 210732 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 614.8582133394423 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|175||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 848.2332478721062 } dataWritten: 210673 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 845.0555663901265 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|2||000000000000000000000000 min: { a: 526.919018850918 } max: { a: 531.7597013546634 } dataWritten: 209912 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 531.4746349892553 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 744.9210849408088 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 933.0462189495814 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 300.0603324337813 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 363.6779080113047 } -->> { : 369.0981926515277 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 209.8684815227433 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 773.3799848158397 } -->> { : 784.2714953599016 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 773.3799848158397 } -->> { : 784.2714953599016 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 773.3799848158397 }, max: { a: 784.2714953599016 }, from: "shard0001", splitKeys: [ { a: 777.6503149863191 } ], shardId: "test.foo-a_773.3799848158397", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee042
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|15||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-145", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719700), what: "split", ns: "test.foo", details: { before: { min: { a: 773.3799848158397 }, max: { a: 784.2714953599016 }, lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 773.3799848158397 }, max: { a: 777.6503149863191 }, lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 777.6503149863191 }, max: { a: 784.2714953599016 }, lastmod: Timestamp 4000|17, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 210672 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 832.1754328762224 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|13||000000000000000000000000 min: { a: 744.9210849408088 } max: { a: 752.6019558395919 } dataWritten: 210384 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 749.5364303777137 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|153||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 985.6773819217475 } dataWritten: 210443 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 981.7914942976363 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|56||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 933.0462189495814 } dataWritten: 210149 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 932.3308696119237 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|10||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 300.0603324337813 } dataWritten: 210202 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 298.2645731698299 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|38||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 369.0981926515277 } dataWritten: 210000 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 367.8445599630223 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|29||000000000000000000000000 min: { a: 209.8684815227433 } max: { a: 216.8904302452864 } dataWritten: 210606 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 214.529317224569 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|113||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 784.2714953599016 } dataWritten: 210581 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 143 version: 4|17||4fd97a3b0d2fef4d6a507be2 based on: 4|15||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|113||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 784.2714953599016 } on: { a: 777.6503149863191 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|17, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|11||000000000000000000000000 min: { a: 802.4966878498034 } max: { a: 815.7684070742035 } dataWritten: 210112 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 802.4966878498034 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 802.4966878498034 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 802.4966878498034 }, max: { a: 815.7684070742035 }, from: "shard0001", splitKeys: [ { a: 807.4105833931693 } ], shardId: "test.foo-a_802.4966878498034", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee043
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|17||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-146", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719888), what: "split", ns: "test.foo", details: { before: { min: { a: 802.4966878498034 }, max: { a: 815.7684070742035 }, lastmod: Timestamp 4000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 802.4966878498034 }, max: { a: 807.4105833931693 }, lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 807.4105833931693 }, max: { a: 815.7684070742035 }, lastmod: Timestamp 4000|19, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 144 version: 4|19||4fd97a3b0d2fef4d6a507be2 based on: 4|17||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|11||000000000000000000000000 min: { a: 802.4966878498034 } max: { a: 815.7684070742035 } on: { a: 807.4105833931693 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|19, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|34||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 910.9608546053483 } dataWritten: 210365 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 910.9608546053483 }
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 909.6447567984426 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 300.0603324337813 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 848.2332478721062 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:45:19 [conn2] request split points lookup for chunk test.foo { : 284.9747465988205 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:45:19 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 284.9747465988205 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:45:19 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 284.9747465988205 }, max: { a: 294.0222214358918 }, from: "shard0001", splitKeys: [ { a: 289.7137301985317 } ], shardId: "test.foo-a_284.9747465988205", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:19 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a6f32a28802daeee044
m30001| Thu Jun 14 01:45:19 [conn2] splitChunk accepted at version 4|19||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:19 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:19-147", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652719958), what: "split", ns: "test.foo", details: { before: { min: { a: 284.9747465988205 }, max: { a: 294.0222214358918 }, lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 284.9747465988205 }, max: { a: 289.7137301985317 }, lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 289.7137301985317 }, max: { a: 294.0222214358918 }, lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:19 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|10||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 300.0603324337813 } dataWritten: 210741 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 298.2426738577902 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|176||000000000000000000000000 min: { a: 848.2332478721062 } max: { a: 855.8703567421647 } dataWritten: 210722 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] chunk not full enough to trigger auto-split { a: 852.7989324206668 }
m30999| Thu Jun 14 01:45:19 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|178||000000000000000000000000 min: { a: 284.9747465988205 } max: { a: 294.0222214358918 } dataWritten: 210630 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:19 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 145 version: 4|21||4fd97a3b0d2fef4d6a507be2 based on: 4|19||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:19 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|178||000000000000000000000000 min: { a: 284.9747465988205 } max: { a: 294.0222214358918 } on: { a: 289.7137301985317 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|21, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:19 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
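
After each "autosplitted ... (splitThreshold 1048576)" line, the new chunk boundaries and their bumped versions (4|20, 4|21, ...) are persisted in the config database, which is what the ChunkManager reload lines pick up. A quick way to see the same layout is to query config.chunks through mongos; the query below is a hedged sketch that uses only names visible in the log (namespace test.foo, shard key a).

// Hedged sketch (mongo shell, connected to the mongos at localhost:30999):
// list test.foo's chunks in shard-key order with their owning shard and version.
var configDB = db.getSiblingDB("config");
configDB.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(function (c) {
    print(tojson(c.min) + " -->> " + tojson(c.max) +
          "  on " + c.shard + "  lastmod " + tojson(c.lastmod));
});
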
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|23||000000000000000000000000 min: { a: 194.8927257678023 } max: { a: 204.0577089538382 } dataWritten: 209849 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 194.8927257678023 } -->> { : 204.0577089538382 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 199.365502339581 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|56||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 933.0462189495814 } dataWritten: 209789 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 933.0462189495814 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 932.146257623467 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|66||000000000000000000000000 min: { a: 417.3437896431063 } max: { a: 422.4151431966537 } dataWritten: 210690 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 417.3437896431063 } -->> { : 422.4151431966537 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 421.9364615432461 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 490.1028421929578 } -->> { : 498.2021416153332 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { a: 490.1028421929578 } max: { a: 498.2021416153332 } dataWritten: 209755 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 494.641032574335 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|3||000000000000000000000000 min: { a: 531.7597013546634 } max: { a: 542.4296058071777 } dataWritten: 210342 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 531.7597013546634 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 531.7597013546634 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 531.7597013546634 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 536.0462960134931 } ], shardId: "test.foo-a_531.7597013546634", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee045
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|21||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-148", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720104), what: "split", ns: "test.foo", details: { before: { min: { a: 531.7597013546634 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 531.7597013546634 }, max: { a: 536.0462960134931 }, lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 536.0462960134931 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 4000|23, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 146 version: 4|23||4fd97a3b0d2fef4d6a507be2 based on: 4|21||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|3||000000000000000000000000 min: { a: 531.7597013546634 } max: { a: 542.4296058071777 } on: { a: 536.0462960134931 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|23, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
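
Each splitChunk on the shard is bracketed by the "created new distributed lock ... ( lock timeout : 900000, ping interval : 30000 ... )", "acquired, ts : ..." and "unlocked." lines above. That lock is backed by documents in the config database; the sketch below shows one way to inspect it. The collection names config.locks and config.lockpings are standard for this release, but the exact document layout here is from memory rather than from this log, so treat it as illustrative only.

// Hedged sketch (mongo shell via mongos): peek at the distributed lock
// that serializes the splitChunk calls above.
var configDB = db.getSiblingDB("config");
printjson(configDB.locks.findOne({ _id: "test.foo" }));  // holder, state, ts of the last acquisition
configDB.lockpings.find().forEach(printjson);            // liveness pings behind the 900000 ms timeout
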
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|164||000000000000000000000000 min: { a: 167.6382092456179 } max: { a: 176.0230312595962 } dataWritten: 210261 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 167.6382092456179 } -->> { : 176.0230312595962 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 172.1555208137998 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|56||000000000000000000000000 min: { a: 927.6813889109981 } max: { a: 933.0462189495814 } dataWritten: 210632 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 927.6813889109981 } -->> { : 933.0462189495814 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 932.06452984311 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|52||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 521.3538677091974 } dataWritten: 209985 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 515.6449770586091 } -->> { : 521.3538677091974 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 520.2793610010488 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 490.1028421929578 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 599.2155367136296 } -->> { : 603.53104016638 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 955.9182567868356 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 955.9182567868356 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 955.9182567868356 }, max: { a: 964.9150523226922 }, from: "shard0001", splitKeys: [ { a: 960.5824651536831 } ], shardId: "test.foo-a_955.9182567868356", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee046
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|23||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-149", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720201), what: "split", ns: "test.foo", details: { before: { min: { a: 955.9182567868356 }, max: { a: 964.9150523226922 }, lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 955.9182567868356 }, max: { a: 960.5824651536831 }, lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 960.5824651536831 }, max: { a: 964.9150523226922 }, lastmod: Timestamp 4000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 807.4105833931693 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 369.0981926515277 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 369.0981926515277 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 369.0981926515277 }, max: { a: 378.3565272980204 }, from: "shard0001", splitKeys: [ { a: 373.3849373054079 } ], shardId: "test.foo-a_369.0981926515277", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee047
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|25||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-150", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720276), what: "split", ns: "test.foo", details: { before: { min: { a: 369.0981926515277 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 2000|39, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 369.0981926515277 }, max: { a: 373.3849373054079 }, lastmod: Timestamp 4000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 373.3849373054079 }, max: { a: 378.3565272980204 }, lastmod: Timestamp 4000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
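
The "request split points lookup for chunk ..." lines are the shard scanning a chunk for candidate split keys, and "max number of requested split points reached (2)" means it stopped once it had enough for a single split. The same scan can be requested by hand with the splitVector command against the shard mongod. The sketch below copies the min/max of the chunk split just above and reuses the 1 MB threshold mongos logs; the command and field names are believed correct for this release, but the call is offered as an assumption to verify, not a guarantee.

// Hedged sketch: ask shard0001 directly for candidate split keys of one chunk.
// Run in a mongo shell connected to localhost:30001.
db.getSiblingDB("admin").runCommand({
    splitVector: "test.foo",
    keyPattern: { a: 1.0 },
    min: { a: 369.0981926515277 },   // chunk bounds copied from the log above
    max: { a: 378.3565272980204 },
    maxChunkSizeBytes: 1048576       // matches the splitThreshold mongos logs
});
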
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { a: 490.1028421929578 } max: { a: 498.2021416153332 } dataWritten: 210258 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 494.6130326110226 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|12||000000000000000000000000 min: { a: 599.2155367136296 } max: { a: 603.53104016638 } dataWritten: 209992 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 603.2977757824266 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|160||000000000000000000000000 min: { a: 955.9182567868356 } max: { a: 964.9150523226922 } dataWritten: 209885 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 147 version: 4|25||4fd97a3b0d2fef4d6a507be2 based on: 4|23||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|160||000000000000000000000000 min: { a: 955.9182567868356 } max: { a: 964.9150523226922 } on: { a: 960.5824651536831 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|25, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } dataWritten: 210527 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 644.9060795428218 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|19||000000000000000000000000 min: { a: 807.4105833931693 } max: { a: 815.7684070742035 } dataWritten: 210202 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 811.8022459028383 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|39||000000000000000000000000 min: { a: 369.0981926515277 } max: { a: 378.3565272980204 } dataWritten: 210351 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 148 version: 4|27||4fd97a3b0d2fef4d6a507be2 based on: 4|25||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|39||000000000000000000000000 min: { a: 369.0981926515277 } max: { a: 378.3565272980204 } on: { a: 373.3849373054079 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|27, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 558.0115575910545 } -->> { : 563.897889911273 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|27||000000000000000000000000 min: { a: 558.0115575910545 } max: { a: 563.897889911273 } dataWritten: 210065 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 562.4381202873567 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 599.2155367136296 } -->> { : 603.53104016638 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|12||000000000000000000000000 min: { a: 599.2155367136296 } max: { a: 603.53104016638 } dataWritten: 210458 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 603.2305672348771 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|180||000000000000000000000000 min: { a: 685.0292821001574 } max: { a: 694.6501944983177 } dataWritten: 209977 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 685.0292821001574 } -->> { : 694.6501944983177 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 685.0292821001574 } -->> { : 694.6501944983177 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 685.0292821001574 }, max: { a: 694.6501944983177 }, from: "shard0001", splitKeys: [ { a: 689.5707127489441 } ], shardId: "test.foo-a_685.0292821001574", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee048
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|27||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-151", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720375), what: "split", ns: "test.foo", details: { before: { min: { a: 685.0292821001574 }, max: { a: 694.6501944983177 }, lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 685.0292821001574 }, max: { a: 689.5707127489441 }, lastmod: Timestamp 4000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 689.5707127489441 }, max: { a: 694.6501944983177 }, lastmod: Timestamp 4000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 149 version: 4|29||4fd97a3b0d2fef4d6a507be2 based on: 4|27||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|180||000000000000000000000000 min: { a: 685.0292821001574 } max: { a: 694.6501944983177 } on: { a: 689.5707127489441 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|29, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 254.1395685736485 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 254.1395685736485 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 254.1395685736485 }, max: { a: 264.0825842924789 }, from: "shard0001", splitKeys: [ { a: 258.6206493525194 } ], shardId: "test.foo-a_254.1395685736485", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee049
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|29||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-152", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720490), what: "split", ns: "test.foo", details: { before: { min: { a: 254.1395685736485 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 254.1395685736485 }, max: { a: 258.6206493525194 }, lastmod: Timestamp 4000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 258.6206493525194 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 4000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 264.0825842924789 } dataWritten: 209772 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 18ms sequenceNumber: 150 version: 4|31||4fd97a3b0d2fef4d6a507be2 based on: 4|29||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 264.0825842924789 } on: { a: 258.6206493525194 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|31, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
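
Every "about to log metadata event ... what: \"split\"" line above ends up as a document in config.changelog with the same before/left/right detail. Reading the changelog back is often easier than grepping this log; the query below is a sketch built only from names visible above (namespace test.foo, event type "split").

// Hedged sketch (mongo shell via mongos): the most recent split events for test.foo.
var configDB = db.getSiblingDB("config");
configDB.changelog.find({ ns: "test.foo", what: "split" })
        .sort({ time: -1 })
        .limit(5)
        .forEach(printjson);
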
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } dataWritten: 210246 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 208.4833854828725 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|20||000000000000000000000000 min: { a: 284.9747465988205 } max: { a: 289.7137301985317 } dataWritten: 210656 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 289.4998773085622 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 284.9747465988205 } -->> { : 289.7137301985317 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 167.6382092456179 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|163||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 167.6382092456179 } dataWritten: 209759 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 163.6023458852731 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|59||000000000000000000000000 min: { a: 873.8718881199745 } max: { a: 882.331873780809 } dataWritten: 210661 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 873.8718881199745 } -->> { : 882.331873780809 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 878.2875066178657 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|166||000000000000000000000000 min: { a: 74.43717892117874 } max: { a: 83.77384564239721 } dataWritten: 210696 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 74.43717892117874 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 74.43717892117874 } -->> { : 83.77384564239721 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 74.43717892117874 }, max: { a: 83.77384564239721 }, from: "shard0001", splitKeys: [ { a: 78.73686651492073 } ], shardId: "test.foo-a_74.43717892117874", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee04a
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|31||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-153", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720689), what: "split", ns: "test.foo", details: { before: { min: { a: 74.43717892117874 }, max: { a: 83.77384564239721 }, lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 74.43717892117874 }, max: { a: 78.73686651492073 }, lastmod: Timestamp 4000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 78.73686651492073 }, max: { a: 83.77384564239721 }, lastmod: Timestamp 4000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 151 version: 4|33||4fd97a3b0d2fef4d6a507be2 based on: 4|31||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|166||000000000000000000000000 min: { a: 74.43717892117874 } max: { a: 83.77384564239721 } on: { a: 78.73686651492073 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|33, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|145||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 66.37486853611429 } dataWritten: 210073 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 66.37486853611429 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 57.56464668319472 } -->> { : 66.37486853611429 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 57.56464668319472 }, max: { a: 66.37486853611429 }, from: "shard0001", splitKeys: [ { a: 61.76919454003927 } ], shardId: "test.foo-a_57.56464668319472", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee04b
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|33||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-154", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720732), what: "split", ns: "test.foo", details: { before: { min: { a: 57.56464668319472 }, max: { a: 66.37486853611429 }, lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 57.56464668319472 }, max: { a: 61.76919454003927 }, lastmod: Timestamp 4000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 61.76919454003927 }, max: { a: 66.37486853611429 }, lastmod: Timestamp 4000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 506.5947777056855 } -->> { : 515.6449770586091 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 506.5947777056855 }, max: { a: 515.6449770586091 }, from: "shard0001", splitKeys: [ { a: 510.639225969218 } ], shardId: "test.foo-a_506.5947777056855", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee04c
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|35||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-155", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720744), what: "split", ns: "test.foo", details: { before: { min: { a: 506.5947777056855 }, max: { a: 515.6449770586091 }, lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 506.5947777056855 }, max: { a: 510.639225969218 }, lastmod: Timestamp 4000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 510.639225969218 }, max: { a: 515.6449770586091 }, lastmod: Timestamp 4000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 678.3563510786536 } -->> { : 685.0292821001574 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 61.76919454003927 } -->> { : 66.37486853611429 }
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 152 version: 4|35||4fd97a3b0d2fef4d6a507be2 based on: 4|33||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|145||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 66.37486853611429 } on: { a: 61.76919454003927 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|35, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|156||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 515.6449770586091 } dataWritten: 210136 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 153 version: 4|37||4fd97a3b0d2fef4d6a507be2 based on: 4|35||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|156||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 515.6449770586091 } on: { a: 510.639225969218 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|37, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|179||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 685.0292821001574 } dataWritten: 210670 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 682.2504691674491 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|35||000000000000000000000000 min: { a: 61.76919454003927 } max: { a: 66.37486853611429 } dataWritten: 209841 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 66.06730159722652 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|0||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 5.826356493812579 } dataWritten: 210627 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 4.715125166522061 }
m30000| Thu Jun 14 01:45:20 [conn11] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 5.826356493812579 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 207784 splitThreshold: 943718
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split no split entry
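
The check just above is for the top chunk ({ a: 998.3975234740553 } -->> MaxKey). Two things differ from the other checks: the threshold is 943718 rather than 1048576 (about 90% of it, presumably the special-casing of chunks ending at MaxKey so the hot upper end splits a little earlier), and "no split entry" means the shard found no candidate key in it yet. When autosplit declines like this, a split can still be requested through mongos with the standard split admin command; in the sketch below the key value 999 is made up purely for illustration, while "test.foo" and the shard key a come from the log.

// Hedged sketch (mongo shell via mongos): force a split when autosplit declines.
db.adminCommand({ split: "test.foo", middle: { a: 999 } });  // split exactly at a: 999 (illustrative value)
db.adminCommand({ split: "test.foo", find:   { a: 999 } });  // or let the server pick a split point in that chunk
// sh.splitAt("test.foo", { a: 999 }) is the shell helper for the first form.
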
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 664.5574284897642 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 664.5574284897642 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 664.5574284897642 }, max: { a: 678.3563510786536 }, from: "shard0001", splitKeys: [ { a: 668.6362621623331 } ], shardId: "test.foo-a_664.5574284897642", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee04d
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|37||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-156", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720842), what: "split", ns: "test.foo", details: { before: { min: { a: 664.5574284897642 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 664.5574284897642 }, max: { a: 668.6362621623331 }, lastmod: Timestamp 4000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 668.6362621623331 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 4000|39, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 383.7239757530736 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 383.7239757530736 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 383.7239757530736 }, max: { a: 392.8718206829087 }, from: "shard0001", splitKeys: [ { a: 387.7659705009871 } ], shardId: "test.foo-a_383.7239757530736", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee04e
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|39||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-157", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720852), what: "split", ns: "test.foo", details: { before: { min: { a: 383.7239757530736 }, max: { a: 392.8718206829087 }, lastmod: Timestamp 2000|37, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 383.7239757530736 }, max: { a: 387.7659705009871 }, lastmod: Timestamp 4000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 387.7659705009871 }, max: { a: 392.8718206829087 }, lastmod: Timestamp 4000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 721.9923962351373 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:45:20 [conn2] request split points lookup for chunk test.foo { : 39.89992532263464 } -->> { : 47.94081917961535 }
m30001| Thu Jun 14 01:45:20 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 39.89992532263464 } -->> { : 47.94081917961535 }
m30001| Thu Jun 14 01:45:20 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 39.89992532263464 }, max: { a: 47.94081917961535 }, from: "shard0001", splitKeys: [ { a: 43.98990958864879 } ], shardId: "test.foo-a_39.89992532263464", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:20 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7032a28802daeee04f
m30001| Thu Jun 14 01:45:20 [conn2] splitChunk accepted at version 4|41||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:20 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:20-158", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652720888), what: "split", ns: "test.foo", details: { before: { min: { a: 39.89992532263464 }, max: { a: 47.94081917961535 }, lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 39.89992532263464 }, max: { a: 43.98990958864879 }, lastmod: Timestamp 4000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 43.98990958864879 }, max: { a: 47.94081917961535 }, lastmod: Timestamp 4000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:20 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|182||000000000000000000000000 min: { a: 664.5574284897642 } max: { a: 678.3563510786536 } dataWritten: 209967 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 154 version: 4|39||4fd97a3b0d2fef4d6a507be2 based on: 4|37||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|182||000000000000000000000000 min: { a: 664.5574284897642 } max: { a: 678.3563510786536 } on: { a: 668.6362621623331 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|39, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|37||000000000000000000000000 min: { a: 383.7239757530736 } max: { a: 392.8718206829087 } dataWritten: 210559 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 155 version: 4|41||4fd97a3b0d2fef4d6a507be2 based on: 4|39||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|37||000000000000000000000000 min: { a: 383.7239757530736 } max: { a: 392.8718206829087 } on: { a: 387.7659705009871 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|41, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|168||000000000000000000000000 min: { a: 721.9923962351373 } max: { a: 729.8361633348899 } dataWritten: 210515 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 725.9910943416982 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|147||000000000000000000000000 min: { a: 39.89992532263464 } max: { a: 47.94081917961535 } dataWritten: 210196 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 156 version: 4|43||4fd97a3b0d2fef4d6a507be2 based on: 4|41||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:20 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|147||000000000000000000000000 min: { a: 39.89992532263464 } max: { a: 47.94081917961535 } on: { a: 43.98990958864879 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|43, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:20 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:20 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { a: 5.826356493812579 } max: { a: 12.55217658236718 } dataWritten: 209965 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:20 [conn11] request split points lookup for chunk test.foo { : 5.826356493812579 } -->> { : 12.55217658236718 }
m30999| Thu Jun 14 01:45:20 [conn] chunk not full enough to trigger auto-split { a: 10.35367838870227 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 615.3266278873516 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 300.0603324337813 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 101.960589257945 } -->> { : 111.0431509615952 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 101.960589257945 } -->> { : 111.0431509615952 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 101.960589257945 }, max: { a: 111.0431509615952 }, from: "shard0001", splitKeys: [ { a: 106.0311910436654 } ], shardId: "test.foo-a_101.960589257945", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee050
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|43||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|63||000000000000000000000000 min: { a: 615.3266278873516 } max: { a: 623.3985075048967 } dataWritten: 210634 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 619.2564572009848 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|10||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 300.0603324337813 } dataWritten: 209854 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 297.9053914660188 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|135||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 111.0431509615952 } dataWritten: 210026 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-159", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721039), what: "split", ns: "test.foo", details: { before: { min: { a: 101.960589257945 }, max: { a: 111.0431509615952 }, lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 101.960589257945 }, max: { a: 106.0311910436654 }, lastmod: Timestamp 4000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 106.0311910436654 }, max: { a: 111.0431509615952 }, lastmod: Timestamp 4000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 157 version: 4|45||4fd97a3b0d2fef4d6a507be2 based on: 4|43||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|135||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 111.0431509615952 } on: { a: 106.0311910436654 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|45, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|5||000000000000000000000000 min: { a: 127.4590140914801 } max: { a: 136.5735165062921 } dataWritten: 210516 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 127.4590140914801 } -->> { : 136.5735165062921 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 127.4590140914801 } -->> { : 136.5735165062921 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 127.4590140914801 }, max: { a: 136.5735165062921 }, from: "shard0001", splitKeys: [ { a: 131.8115136015859 } ], shardId: "test.foo-a_127.4590140914801", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee051
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|45||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-160", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721052), what: "split", ns: "test.foo", details: { before: { min: { a: 127.4590140914801 }, max: { a: 136.5735165062921 }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 127.4590140914801 }, max: { a: 131.8115136015859 }, lastmod: Timestamp 4000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 131.8115136015859 }, max: { a: 136.5735165062921 }, lastmod: Timestamp 4000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
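Each splitChunk run on the shard is bracketed by the 'created new distributed lock' / 'acquired' / 'unlocked' lines above; the lock document lives on the config server, so concurrent splits and migrations of test.foo serialize on it. A hedged sketch for inspecting that state from a shell connected to the mongos; the field names and state codes reflect the 2.x config metadata format and are assumptions beyond what the log itself shows:

var conf = db.getSiblingDB("config");
conf.locks.find({ _id: "test.foo" }).pretty();      // state: 0 = unlocked, 2 = held
conf.lockpings.find().sort({ ping: -1 }).limit(3);  // processes keeping their locks alive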
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 158 version: 4|47||4fd97a3b0d2fef4d6a507be2 based on: 4|45||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|5||000000000000000000000000 min: { a: 127.4590140914801 } max: { a: 136.5735165062921 } on: { a: 131.8115136015859 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|47, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } dataWritten: 210273 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 192.7692655733219 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|43||000000000000000000000000 min: { a: 43.98990958864879 } max: { a: 47.94081917961535 } dataWritten: 210712 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 43.98990958864879 } -->> { : 47.94081917961535 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 47.60715408819672 }
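The 'request split points lookup' lines correspond to the splitVector command that mongos sends to the shard owning the chunk. Judging from the log it asks for at most two split points: 'max number of requested split points reached (2)' means the chunk is clearly over threshold, while zero or one returned point is reported as 'chunk not full enough to trigger auto-split'. A hedged sketch of running the same lookup directly against the shard; the direct connection to localhost:30001 and the exact option values are illustrative assumptions:

// Ask the shard for candidate split points for one chunk of test.foo.
var shardAdmin = new Mongo("localhost:30001").getDB("admin");
shardAdmin.runCommand({
    splitVector: "test.foo",
    keyPattern: { a: 1.0 },
    min: { a: 43.98990958864879 },
    max: { a: 47.94081917961535 },
    maxChunkSizeBytes: 1048576,
    maxSplitPoints: 2
});
// The splitKeys array in the reply is what mongos feeds into the splitChunk request.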
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 344.8762285660836 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 344.8762285660836 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 344.8762285660836 }, max: { a: 353.2720479801309 }, from: "shard0001", splitKeys: [ { a: 349.1094580993942 } ], shardId: "test.foo-a_344.8762285660836", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee052
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|47||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-161", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721107), what: "split", ns: "test.foo", details: { before: { min: { a: 344.8762285660836 }, max: { a: 353.2720479801309 }, lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 344.8762285660836 }, max: { a: 349.1094580993942 }, lastmod: Timestamp 4000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 349.1094580993942 }, max: { a: 353.2720479801309 }, lastmod: Timestamp 4000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 159.2125242384949 } -->> { : 167.6382092456179 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 159.2125242384949 } -->> { : 167.6382092456179 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 159.2125242384949 }, max: { a: 167.6382092456179 }, from: "shard0001", splitKeys: [ { a: 163.3701742796004 } ], shardId: "test.foo-a_159.2125242384949", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee053
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|49||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-162", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721127), what: "split", ns: "test.foo", details: { before: { min: { a: 159.2125242384949 }, max: { a: 167.6382092456179 }, lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 159.2125242384949 }, max: { a: 163.3701742796004 }, lastmod: Timestamp 4000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 163.3701742796004 }, max: { a: 167.6382092456179 }, lastmod: Timestamp 4000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 648.6747268265868 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 648.6747268265868 } -->> { : 657.3538695372831 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 648.6747268265868 }, max: { a: 657.3538695372831 }, from: "shard0001", splitKeys: [ { a: 652.9401841699823 } ], shardId: "test.foo-a_648.6747268265868", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee054
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|51||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-163", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721167), what: "split", ns: "test.foo", details: { before: { min: { a: 648.6747268265868 }, max: { a: 657.3538695372831 }, lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 648.6747268265868 }, max: { a: 652.9401841699823 }, lastmod: Timestamp 4000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 652.9401841699823 }, max: { a: 657.3538695372831 }, lastmod: Timestamp 4000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
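Every 'about to log metadata event' line becomes a document in config.changelog, so the full sequence of splits in this run can be audited after the fact. A small shell sketch, assuming a connection to the mongos:

var conf = db.getSiblingDB("config");
conf.changelog.find({ ns: "test.foo", what: "split" })
              .sort({ time: -1 })
              .limit(2)
              .pretty();   // most recent split events, matching the log lines above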
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 558.0115575910545 } -->> { : 563.897889911273 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|172||000000000000000000000000 min: { a: 344.8762285660836 } max: { a: 353.2720479801309 } dataWritten: 210106 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 159 version: 4|49||4fd97a3b0d2fef4d6a507be2 based on: 4|47||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|172||000000000000000000000000 min: { a: 344.8762285660836 } max: { a: 353.2720479801309 } on: { a: 349.1094580993942 } (splitThreshold 1048576)
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 777.6503149863191 } -->> { : 784.2714953599016 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|49, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|163||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 167.6382092456179 } dataWritten: 210315 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 160 version: 4|51||4fd97a3b0d2fef4d6a507be2 based on: 4|49||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|163||000000000000000000000000 min: { a: 159.2125242384949 } max: { a: 167.6382092456179 } on: { a: 163.3701742796004 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|51, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|174||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 657.3538695372831 } dataWritten: 209968 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 161 version: 4|53||4fd97a3b0d2fef4d6a507be2 based on: 4|51||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|174||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 657.3538695372831 } on: { a: 652.9401841699823 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|53, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
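The setShardVersion exchanges above are how mongos tells each shard which chunk version of test.foo to expect after a metadata reload. The companion getShardVersion command reports the current state and can be run against either side; a hedged sketch, where the direct shard connection on localhost:30001 is an assumption for illustration:

// Version known to mongos for this collection.
db.adminCommand({ getShardVersion: "test.foo" });
// Version the shard itself believes it has.
new Mongo("localhost:30001").getDB("admin").runCommand({ getShardVersion: "test.foo" });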
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|27||000000000000000000000000 min: { a: 558.0115575910545 } max: { a: 563.897889911273 } dataWritten: 209895 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 562.0426719729961 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|17||000000000000000000000000 min: { a: 777.6503149863191 } max: { a: 784.2714953599016 } dataWritten: 209986 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 781.5195321860239 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|143||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 761.349721153896 } dataWritten: 209806 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 752.6019558395919 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 752.6019558395919 }, max: { a: 761.349721153896 }, from: "shard0001", splitKeys: [ { a: 756.637103632288 } ], shardId: "test.foo-a_752.6019558395919", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee055
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|53||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-164", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721203), what: "split", ns: "test.foo", details: { before: { min: { a: 752.6019558395919 }, max: { a: 761.349721153896 }, lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 752.6019558395919 }, max: { a: 756.637103632288 }, lastmod: Timestamp 4000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 756.637103632288 }, max: { a: 761.349721153896 }, lastmod: Timestamp 4000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 162 version: 4|55||4fd97a3b0d2fef4d6a507be2 based on: 4|53||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|143||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 761.349721153896 } on: { a: 756.637103632288 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|55, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
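Each 'ChunkManager: time to load chunks ... version: 4|55||<epoch>' line is mongos reloading the chunk map after a split; the major|minor version and the epoch come straight from the lastmod / lastmodEpoch fields of the documents in config.chunks. A short shell sketch for looking at the same data, assuming a connection to the mongos:

var conf = db.getSiblingDB("config");
conf.chunks.find({ ns: "test.foo" })
           .sort({ lastmod: -1 })   // most recently split chunks first
           .limit(3)
           .pretty();
conf.chunks.count({ ns: "test.foo" });  // how many chunks the collection has so far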
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 910.9608546053483 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 392.8718206829087 } -->> { : 400.6101810646703 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 744.9210849408088 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 30.85678137192671 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 30.85678137192671 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 30.85678137192671 }, max: { a: 39.89992532263464 }, from: "shard0001", splitKeys: [ { a: 34.95140019143683 } ], shardId: "test.foo-a_30.85678137192671", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee056
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|55||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-165", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721247), what: "split", ns: "test.foo", details: { before: { min: { a: 30.85678137192671 }, max: { a: 39.89992532263464 }, lastmod: Timestamp 2000|47, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 30.85678137192671 }, max: { a: 34.95140019143683 }, lastmod: Timestamp 4000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 34.95140019143683 }, max: { a: 39.89992532263464 }, lastmod: Timestamp 4000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 891.8750702869381 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 882.331873780809 } -->> { : 891.8750702869381 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 882.331873780809 }, max: { a: 891.8750702869381 }, from: "shard0001", splitKeys: [ { a: 886.5207670748756 } ], shardId: "test.foo-a_882.331873780809", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee057
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|57||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-166", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721283), what: "split", ns: "test.foo", details: { before: { min: { a: 882.331873780809 }, max: { a: 891.8750702869381 }, lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 882.331873780809 }, max: { a: 886.5207670748756 }, lastmod: Timestamp 4000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 886.5207670748756 }, max: { a: 891.8750702869381 }, lastmod: Timestamp 4000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 784.2714953599016 } -->> { : 790.298943411581 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 358.3343339611492 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 447.8806134954977 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|34||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 910.9608546053483 } dataWritten: 210148 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 909.4053121839185 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|169||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 400.6101810646703 } dataWritten: 209911 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 397.1315522635342 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|13||000000000000000000000000 min: { a: 744.9210849408088 } max: { a: 752.6019558395919 } dataWritten: 210723 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 749.0153842187196 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|47||000000000000000000000000 min: { a: 30.85678137192671 } max: { a: 39.89992532263464 } dataWritten: 210417 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 163 version: 4|57||4fd97a3b0d2fef4d6a507be2 based on: 4|55||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|47||000000000000000000000000 min: { a: 30.85678137192671 } max: { a: 39.89992532263464 } on: { a: 34.95140019143683 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|57, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } dataWritten: 209945 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 192.702203892877 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|133||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 891.8750702869381 } dataWritten: 210589 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 164 version: 4|59||4fd97a3b0d2fef4d6a507be2 based on: 4|57||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|133||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 891.8750702869381 } on: { a: 886.5207670748756 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|59, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|42||000000000000000000000000 min: { a: 784.2714953599016 } max: { a: 790.298943411581 } dataWritten: 210543 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 788.5184500880409 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|68||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 358.3343339611492 } dataWritten: 210024 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 357.4935909780339 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|16||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 447.8806134954977 } dataWritten: 210390 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 445.2617020031009 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|31||000000000000000000000000 min: { a: 833.5963963333859 } max: { a: 840.7121644073931 } dataWritten: 209859 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 833.5963963333859 } -->> { : 840.7121644073931 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 837.7464649883466 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|32||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 181.7281932506388 } dataWritten: 210051 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 181.7281932506388 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 179.8840220106583 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|15||000000000000000000000000 min: { a: 632.4786347534061 } max: { a: 640.7093733209429 } dataWritten: 210748 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 632.4786347534061 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 632.4786347534061 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 632.4786347534061 }, max: { a: 640.7093733209429 }, from: "shard0001", splitKeys: [ { a: 636.2085863336085 } ], shardId: "test.foo-a_632.4786347534061", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee058
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|59||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-167", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721367), what: "split", ns: "test.foo", details: { before: { min: { a: 632.4786347534061 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 4000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 632.4786347534061 }, max: { a: 636.2085863336085 }, lastmod: Timestamp 4000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 636.2085863336085 }, max: { a: 640.7093733209429 }, lastmod: Timestamp 4000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 165 version: 4|61||4fd97a3b0d2fef4d6a507be2 based on: 4|59||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|15||000000000000000000000000 min: { a: 632.4786347534061 } max: { a: 640.7093733209429 } on: { a: 636.2085863336085 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|61, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|0||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 5.826356493812579 } dataWritten: 210327 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:21 [conn11] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 5.826356493812579 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 4.38070623778497 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|58||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 873.8718881199745 } dataWritten: 209984 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 868.5788679342879 } -->> { : 873.8718881199745 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 872.6105215153633 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|8||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 240.0709323500288 } dataWritten: 210629 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 240.0709323500288 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 237.9397433737564 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 269.785248844529 } -->> { : 277.1560315461681 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 309.3101713472285 } -->> { : 315.9151551096841 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 337.6965417950217 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 327.5292321238884 } -->> { : 337.6965417950217 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 327.5292321238884 }, max: { a: 337.6965417950217 }, from: "shard0001", splitKeys: [ { a: 331.4018789379612 } ], shardId: "test.foo-a_327.5292321238884", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee059
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|61||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-168", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721474), what: "split", ns: "test.foo", details: { before: { min: { a: 327.5292321238884 }, max: { a: 337.6965417950217 }, lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 327.5292321238884 }, max: { a: 331.4018789379612 }, lastmod: Timestamp 4000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 331.4018789379612 }, max: { a: 337.6965417950217 }, lastmod: Timestamp 4000|63, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 603.53104016638 } -->> { : 610.6068178358934 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 821.178966084225 } -->> { : 827.5642418995561 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 189652 splitThreshold: 943718
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split no split entry
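The chunk above runs from { a: 998.3975234740553 } to MaxKey and is checked against a lower threshold (943718 bytes, roughly 0.9 × 1048576), consistent with mongos splitting boundary chunks slightly earlier than interior ones; 'no split entry' means the shard found no candidate split key at all. A hedged sketch for locating that top chunk in the metadata; querying on an embedded MaxKey value is illustrative and assumes the shell's MaxKey constant matches the stored bound:

var conf = db.getSiblingDB("config");
conf.chunks.find({ ns: "test.foo", "max.a": MaxKey }).pretty();  // the MaxKey-bounded top chunk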
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|41||000000000000000000000000 min: { a: 269.785248844529 } max: { a: 277.1560315461681 } dataWritten: 210009 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 274.0195285184903 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|24||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 315.9151551096841 } dataWritten: 209823 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 313.5024747847146 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|117||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 337.6965417950217 } dataWritten: 210295 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 166 version: 4|63||4fd97a3b0d2fef4d6a507be2 based on: 4|61||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|117||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 337.6965417950217 } on: { a: 331.4018789379612 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|63, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|13||000000000000000000000000 min: { a: 603.53104016638 } max: { a: 610.6068178358934 } dataWritten: 210505 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 607.3544263234241 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|61||000000000000000000000000 min: { a: 821.178966084225 } max: { a: 827.5642418995561 } dataWritten: 210541 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 825.3711864605968 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 193969 splitThreshold: 943718
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|55||000000000000000000000000 min: { a: 756.637103632288 } max: { a: 761.349721153896 } dataWritten: 210029 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 756.637103632288 } -->> { : 761.349721153896 }
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 760.4386156025261 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 417.3437896431063 } -->> { : 422.4151431966537 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 12.55217658236718 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 12.55217658236718 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, from: "shard0001", splitKeys: [ { a: 16.11151483141404 } ], shardId: "test.foo-a_12.55217658236718", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee05a
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|63||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-169", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721572), what: "split", ns: "test.foo", details: { before: { min: { a: 12.55217658236718 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, lastmod: Timestamp 4000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 16.11151483141404 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 4000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 898.6566515076229 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 473.1445991105042 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 473.1445991105042 }, max: { a: 483.6281235892167 }, from: "shard0001", splitKeys: [ { a: 477.2807394020033 } ], shardId: "test.foo-a_473.1445991105042", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee05b
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|65||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-170", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721644), what: "split", ns: "test.foo", details: { before: { min: { a: 473.1445991105042 }, max: { a: 483.6281235892167 }, lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 473.1445991105042 }, max: { a: 477.2807394020033 }, lastmod: Timestamp 4000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 477.2807394020033 }, max: { a: 483.6281235892167 }, lastmod: Timestamp 4000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 685.0292821001574 } -->> { : 689.5707127489441 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 821.178966084225 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 933.0462189495814 } -->> { : 938.1160661714987 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 708.8986861220777 } -->> { : 714.0536251380356 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 141.1884883168546 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 777.6503149863191 } -->> { : 784.2714953599016 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 277.1560315461681 } -->> { : 284.9747465988205 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 331.4018789379612 } -->> { : 337.6965417950217 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 580.4600029065366 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 580.4600029065366 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 580.4600029065366 }, max: { a: 590.8997745355827 }, from: "shard0001", splitKeys: [ { a: 584.4225320226172 } ], shardId: "test.foo-a_580.4600029065366", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee05c
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|67||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-171", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721781), what: "split", ns: "test.foo", details: { before: { min: { a: 580.4600029065366 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 580.4600029065366 }, max: { a: 584.4225320226172 }, lastmod: Timestamp 4000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 584.4225320226172 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 4000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 269.785248844529 } -->> { : 277.1560315461681 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 761.349721153896 } -->> { : 773.3799848158397 }
m30001| Thu Jun 14 01:45:21 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 761.349721153896 } -->> { : 773.3799848158397 }
m30001| Thu Jun 14 01:45:21 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 761.349721153896 }, max: { a: 773.3799848158397 }, from: "shard0001", splitKeys: [ { a: 765.2211241548246 } ], shardId: "test.foo-a_761.349721153896", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:21 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7132a28802daeee05d
m30001| Thu Jun 14 01:45:21 [conn2] splitChunk accepted at version 4|69||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:21 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:21-172", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652721797), what: "split", ns: "test.foo", details: { before: { min: { a: 761.349721153896 }, max: { a: 773.3799848158397 }, lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 761.349721153896 }, max: { a: 765.2211241548246 }, lastmod: Timestamp 4000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 765.2211241548246 }, max: { a: 773.3799848158397 }, lastmod: Timestamp 4000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:21 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 337.6965417950217 } -->> { : 344.8762285660836 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 599.2155367136296 } -->> { : 603.53104016638 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 269.785248844529 } -->> { : 277.1560315461681 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 861.9626177544285 } -->> { : 868.5788679342879 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 886.5207670748756 } -->> { : 891.8750702869381 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 943.2489828660326 } -->> { : 948.0165404542549 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 886.5207670748756 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 417.3437896431063 }
m30001| Thu Jun 14 01:45:21 [conn2] request split points lookup for chunk test.foo { : 636.2085863336085 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 349.1094580993942 } -->> { : 353.2720479801309 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 248.3080159156712 } -->> { : 254.1395685736485 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 16.11151483141404 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 16.11151483141404 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 16.11151483141404 }, max: { a: 25.60273139230473 }, from: "shard0001", splitKeys: [ { a: 20.02617482801994 } ], shardId: "test.foo-a_16.11151483141404", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee05e
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|71||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-173", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722089), what: "split", ns: "test.foo", details: { before: { min: { a: 16.11151483141404 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 4000|65, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, lastmod: Timestamp 4000|72, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 20.02617482801994 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 4000|73, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 433.3806610330477 } -->> { : 441.0435238853461 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 433.3806610330477 } -->> { : 441.0435238853461 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 433.3806610330477 }, max: { a: 441.0435238853461 }, from: "shard0001", splitKeys: [ { a: 437.040103636678 } ], shardId: "test.foo-a_433.3806610330477", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee05f
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|73||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-174", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722116), what: "split", ns: "test.foo", details: { before: { min: { a: 433.3806610330477 }, max: { a: 441.0435238853461 }, lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 433.3806610330477 }, max: { a: 437.040103636678 }, lastmod: Timestamp 4000|74, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 437.040103636678 }, max: { a: 441.0435238853461 }, lastmod: Timestamp 4000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|66||000000000000000000000000 min: { a: 417.3437896431063 } max: { a: 422.4151431966537 } dataWritten: 210389 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 421.3829957737632 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|1||000000000000000000000000 min: { a: 12.55217658236718 } max: { a: 25.60273139230473 } dataWritten: 210554 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 15ms sequenceNumber: 167 version: 4|65||4fd97a3b0d2fef4d6a507be2 based on: 4|63||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|1||000000000000000000000000 min: { a: 12.55217658236718 } max: { a: 25.60273139230473 } on: { a: 16.11151483141404 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|65, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 898.6566515076229 } dataWritten: 209965 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 895.9830446416969 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } dataWritten: 210261 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 168 version: 4|67||4fd97a3b0d2fef4d6a507be2 based on: 4|65||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|127||000000000000000000000000 min: { a: 473.1445991105042 } max: { a: 483.6281235892167 } on: { a: 477.2807394020033 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|67, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|28||000000000000000000000000 min: { a: 685.0292821001574 } max: { a: 689.5707127489441 } dataWritten: 210000 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 689.1834226467756 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|60||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 821.178966084225 } dataWritten: 210562 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 819.9687946310937 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|57||000000000000000000000000 min: { a: 933.0462189495814 } max: { a: 938.1160661714987 } dataWritten: 209884 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 936.9786477977971 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } dataWritten: 210468 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 192.5626158101032 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|65||000000000000000000000000 min: { a: 708.8986861220777 } max: { a: 714.0536251380356 } dataWritten: 210182 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 712.9023455651062 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|8||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 141.1884883168546 } dataWritten: 210651 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 140.2578248941303 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|17||000000000000000000000000 min: { a: 777.6503149863191 } max: { a: 784.2714953599016 } dataWritten: 210274 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 781.384144905905 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|177||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 284.9747465988205 } dataWritten: 210024 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 281.3187572520728 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } dataWritten: 210144 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 361.934000338642 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|63||000000000000000000000000 min: { a: 331.4018789379612 } max: { a: 337.6965417950217 } dataWritten: 210661 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 334.9881483136292 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|158||000000000000000000000000 min: { a: 580.4600029065366 } max: { a: 590.8997745355827 } dataWritten: 209960 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 169 version: 4|69||4fd97a3b0d2fef4d6a507be2 based on: 4|67||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|158||000000000000000000000000 min: { a: 580.4600029065366 } max: { a: 590.8997745355827 } on: { a: 584.4225320226172 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|69, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|41||000000000000000000000000 min: { a: 269.785248844529 } max: { a: 277.1560315461681 } dataWritten: 210302 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 273.8590959235109 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|144||000000000000000000000000 min: { a: 761.349721153896 } max: { a: 773.3799848158397 } dataWritten: 209925 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 170 version: 4|71||4fd97a3b0d2fef4d6a507be2 based on: 4|69||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:21 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|144||000000000000000000000000 min: { a: 761.349721153896 } max: { a: 773.3799848158397 } on: { a: 765.2211241548246 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|71, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:21 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|171||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 344.8762285660836 } dataWritten: 209829 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 341.4147434056176 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } dataWritten: 210050 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 192.5098087639168 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|12||000000000000000000000000 min: { a: 599.2155367136296 } max: { a: 603.53104016638 } dataWritten: 210453 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 602.9509062145426 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } dataWritten: 210705 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 644.5210317124642 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|41||000000000000000000000000 min: { a: 269.785248844529 } max: { a: 277.1560315461681 } dataWritten: 209989 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 273.8388235683147 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|21||000000000000000000000000 min: { a: 861.9626177544285 } max: { a: 868.5788679342879 } dataWritten: 210748 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 865.6862707214808 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 210301 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 831.3014505498703 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|59||000000000000000000000000 min: { a: 886.5207670748756 } max: { a: 891.8750702869381 } dataWritten: 210593 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 890.2598189497339 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|71||000000000000000000000000 min: { a: 943.2489828660326 } max: { a: 948.0165404542549 } dataWritten: 210268 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 946.9597816803197 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|58||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 886.5207670748756 } dataWritten: 210398 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 886.2425144113429 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|14||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 417.3437896431063 } dataWritten: 210454 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 414.6489371887974 }
m30999| Thu Jun 14 01:45:21 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|61||000000000000000000000000 min: { a: 636.2085863336085 } max: { a: 640.7093733209429 } dataWritten: 209928 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:21 [conn] chunk not full enough to trigger auto-split { a: 640.0786439594735 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|49||000000000000000000000000 min: { a: 349.1094580993942 } max: { a: 353.2720479801309 } dataWritten: 210312 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 352.6081737633707 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 254.1395685736485 } dataWritten: 210094 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 251.7899555944023 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|65||000000000000000000000000 min: { a: 16.11151483141404 } max: { a: 25.60273139230473 } dataWritten: 209808 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 171 version: 4|73||4fd97a3b0d2fef4d6a507be2 based on: 4|71||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|65||000000000000000000000000 min: { a: 16.11151483141404 } max: { a: 25.60273139230473 } on: { a: 20.02617482801994 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|73, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|19||000000000000000000000000 min: { a: 433.3806610330477 } max: { a: 441.0435238853461 } dataWritten: 209766 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 172 version: 4|75||4fd97a3b0d2fef4d6a507be2 based on: 4|73||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|19||000000000000000000000000 min: { a: 433.3806610330477 } max: { a: 441.0435238853461 } on: { a: 437.040103636678 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|75, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } dataWritten: 210192 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 192.3930918431559 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|165||000000000000000000000000 min: { a: 66.37486853611429 } max: { a: 74.43717892117874 } dataWritten: 210053 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 66.37486853611429 } -->> { : 74.43717892117874 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 66.37486853611429 } -->> { : 74.43717892117874 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 66.37486853611429 }, max: { a: 74.43717892117874 }, from: "shard0001", splitKeys: [ { a: 70.06331619195872 } ], shardId: "test.foo-a_66.37486853611429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee060
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|75||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-175", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722132), what: "split", ns: "test.foo", details: { before: { min: { a: 66.37486853611429 }, max: { a: 74.43717892117874 }, lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 66.37486853611429 }, max: { a: 70.06331619195872 }, lastmod: Timestamp 4000|76, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 70.06331619195872 }, max: { a: 74.43717892117874 }, lastmod: Timestamp 4000|77, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 173 version: 4|77||4fd97a3b0d2fef4d6a507be2 based on: 4|75||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|165||000000000000000000000000 min: { a: 66.37486853611429 } max: { a: 74.43717892117874 } on: { a: 70.06331619195872 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|77, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 210052 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 831.2264845310285 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|8||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 240.0709323500288 } dataWritten: 210470 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 240.0709323500288 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 237.6686644699433 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|60||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 821.178966084225 } dataWritten: 209810 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 821.178966084225 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 819.7772217032632 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|8||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 141.1884883168546 } dataWritten: 210543 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 141.1884883168546 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 140.1489447016414 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|28||000000000000000000000000 min: { a: 685.0292821001574 } max: { a: 689.5707127489441 } dataWritten: 209828 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 685.0292821001574 } -->> { : 689.5707127489441 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 688.9531072597292 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|60||000000000000000000000000 min: { a: 632.4786347534061 } max: { a: 636.2085863336085 } dataWritten: 210664 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 632.4786347534061 } -->> { : 636.2085863336085 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 635.9732237380945 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|48||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 991.2502100401695 } dataWritten: 209836 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 985.6773819217475 } -->> { : 991.2502100401695 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 989.5002926537843 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 991.2502100401695 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 181.7281932506388 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 111.0431509615952 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 111.0431509615952 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 111.0431509615952 }, max: { a: 123.1918419151289 }, from: "shard0001", splitKeys: [ { a: 114.9662096443472 } ], shardId: "test.foo-a_111.0431509615952", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee061
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|77||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-176", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722256), what: "split", ns: "test.foo", details: { before: { min: { a: 111.0431509615952 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 111.0431509615952 }, max: { a: 114.9662096443472 }, lastmod: Timestamp 4000|78, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 114.9662096443472 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 4000|79, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 943.2489828660326 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 910.9608546053483 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 483.6281235892167 } -->> { : 490.1028421929578 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 744.9210849408088 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 744.9210849408088 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 744.9210849408088 }, max: { a: 752.6019558395919 }, from: "shard0001", splitKeys: [ { a: 748.6872188241756 } ], shardId: "test.foo-a_744.9210849408088", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee062
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|79||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-177", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722320), what: "split", ns: "test.foo", details: { before: { min: { a: 744.9210849408088 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 744.9210849408088 }, max: { a: 748.6872188241756 }, lastmod: Timestamp 4000|80, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 748.6872188241756 }, max: { a: 752.6019558395919 }, lastmod: Timestamp 4000|81, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 721.9923962351373 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 721.9923962351373 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 721.9923962351373 }, max: { a: 729.8361633348899 }, from: "shard0001", splitKeys: [ { a: 725.5771489434317 } ], shardId: "test.foo-a_721.9923962351373", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee063
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|81||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-178", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722343), what: "split", ns: "test.foo", details: { before: { min: { a: 721.9923962351373 }, max: { a: 729.8361633348899 }, lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 721.9923962351373 }, max: { a: 725.5771489434317 }, lastmod: Timestamp 4000|82, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 725.5771489434317 }, max: { a: 729.8361633348899 }, lastmod: Timestamp 4000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|49||000000000000000000000000 min: { a: 991.2502100401695 } max: { a: 998.3975234740553 } dataWritten: 210543 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 995.2717404591116 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|32||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 181.7281932506388 } dataWritten: 210331 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 179.6009902681074 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|136||000000000000000000000000 min: { a: 111.0431509615952 } max: { a: 123.1918419151289 } dataWritten: 209819 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 174 version: 4|79||4fd97a3b0d2fef4d6a507be2 based on: 4|77||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|136||000000000000000000000000 min: { a: 111.0431509615952 } max: { a: 123.1918419151289 } on: { a: 114.9662096443472 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|79, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|70||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 943.2489828660326 } dataWritten: 210261 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 941.9277790144505 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|35||000000000000000000000000 min: { a: 910.9608546053483 } max: { a: 918.4259760765641 } dataWritten: 210687 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 914.8113452750008 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|6||000000000000000000000000 min: { a: 483.6281235892167 } max: { a: 490.1028421929578 } dataWritten: 209755 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 487.3409178613742 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|13||000000000000000000000000 min: { a: 744.9210849408088 } max: { a: 752.6019558395919 } dataWritten: 210672 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 175 version: 4|81||4fd97a3b0d2fef4d6a507be2 based on: 4|79||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|13||000000000000000000000000 min: { a: 744.9210849408088 } max: { a: 752.6019558395919 } on: { a: 748.6872188241756 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|81, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|168||000000000000000000000000 min: { a: 721.9923962351373 } max: { a: 729.8361633348899 } dataWritten: 210070 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 176 version: 4|83||4fd97a3b0d2fef4d6a507be2 based on: 4|81||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|168||000000000000000000000000 min: { a: 721.9923962351373 } max: { a: 729.8361633348899 } on: { a: 725.5771489434317 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|83, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } dataWritten: 210136 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 640.7093733209429 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 640.7093733209429 }, max: { a: 648.6747268265868 }, from: "shard0001", splitKeys: [ { a: 644.4017960752651 } ], shardId: "test.foo-a_640.7093733209429", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee064
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|83||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-179", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722411), what: "split", ns: "test.foo", details: { before: { min: { a: 640.7093733209429 }, max: { a: 648.6747268265868 }, lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 640.7093733209429 }, max: { a: 644.4017960752651 }, lastmod: Timestamp 4000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 644.4017960752651 }, max: { a: 648.6747268265868 }, lastmod: Timestamp 4000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 177 version: 4|85||4fd97a3b0d2fef4d6a507be2 based on: 4|83||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|173||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 648.6747268265868 } on: { a: 644.4017960752651 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|85, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|61||000000000000000000000000 min: { a: 821.178966084225 } max: { a: 827.5642418995561 } dataWritten: 210056 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 821.178966084225 } -->> { : 827.5642418995561 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 825.0347288716866 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|20||000000000000000000000000 min: { a: 284.9747465988205 } max: { a: 289.7137301985317 } dataWritten: 209908 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 284.9747465988205 } -->> { : 289.7137301985317 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 288.9115260467031 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 873.8718881199745 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 873.8718881199745 } -->> { : 882.331873780809 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 873.8718881199745 }, max: { a: 882.331873780809 }, from: "shard0001", splitKeys: [ { a: 877.8438233640235 } ], shardId: "test.foo-a_873.8718881199745", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee065
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|85||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-180", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722515), what: "split", ns: "test.foo", details: { before: { min: { a: 873.8718881199745 }, max: { a: 882.331873780809 }, lastmod: Timestamp 2000|59, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 873.8718881199745 }, max: { a: 877.8438233640235 }, lastmod: Timestamp 4000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 877.8438233640235 }, max: { a: 882.331873780809 }, lastmod: Timestamp 4000|87, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|59||000000000000000000000000 min: { a: 873.8718881199745 } max: { a: 882.331873780809 } dataWritten: 209851 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 178 version: 4|87||4fd97a3b0d2fef4d6a507be2 based on: 4|85||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|59||000000000000000000000000 min: { a: 873.8718881199745 } max: { a: 882.331873780809 } on: { a: 877.8438233640235 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|87, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|39||000000000000000000000000 min: { a: 668.6362621623331 } max: { a: 678.3563510786536 } dataWritten: 209731 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 668.6362621623331 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 668.6362621623331 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 668.6362621623331 }, max: { a: 678.3563510786536 }, from: "shard0001", splitKeys: [ { a: 672.2870891659105 } ], shardId: "test.foo-a_668.6362621623331", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee066
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|87||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-181", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722526), what: "split", ns: "test.foo", details: { before: { min: { a: 668.6362621623331 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 4000|39, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 668.6362621623331 }, max: { a: 672.2870891659105 }, lastmod: Timestamp 4000|88, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 672.2870891659105 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 4000|89, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 179 version: 4|89||4fd97a3b0d2fef4d6a507be2 based on: 4|87||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|39||000000000000000000000000 min: { a: 668.6362621623331 } max: { a: 678.3563510786536 } on: { a: 672.2870891659105 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|89, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|24||000000000000000000000000 min: { a: 955.9182567868356 } max: { a: 960.5824651536831 } dataWritten: 210152 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 955.9182567868356 } -->> { : 960.5824651536831 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 959.7366792180193 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|46||000000000000000000000000 min: { a: 25.60273139230473 } max: { a: 30.85678137192671 } dataWritten: 210055 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 29.26919649688264 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 25.60273139230473 } -->> { : 30.85678137192671 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|141||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 599.2155367136296 } dataWritten: 209925 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 599.2155367136296 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 590.8997745355827 } -->> { : 599.2155367136296 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 590.8997745355827 }, max: { a: 599.2155367136296 }, from: "shard0001", splitKeys: [ { a: 594.3878051880898 } ], shardId: "test.foo-a_590.8997745355827", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee067
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|89||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-182", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722552), what: "split", ns: "test.foo", details: { before: { min: { a: 590.8997745355827 }, max: { a: 599.2155367136296 }, lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 590.8997745355827 }, max: { a: 594.3878051880898 }, lastmod: Timestamp 4000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 594.3878051880898 }, max: { a: 599.2155367136296 }, lastmod: Timestamp 4000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 180 version: 4|91||4fd97a3b0d2fef4d6a507be2 based on: 4|89||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|141||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 599.2155367136296 } on: { a: 594.3878051880898 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|91, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|10||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 300.0603324337813 } dataWritten: 209795 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 300.0603324337813 }
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 297.5736951209433 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|54||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 756.637103632288 } dataWritten: 210459 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 756.2889622490917 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|13||000000000000000000000000 min: { a: 603.53104016638 } max: { a: 610.6068178358934 } dataWritten: 210599 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 607.1432304675706 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|151||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 571.914212129846 } dataWritten: 210522 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 756.637103632288 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 603.53104016638 } -->> { : 610.6068178358934 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 563.897889911273 } -->> { : 571.914212129846 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 563.897889911273 } -->> { : 571.914212129846 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 563.897889911273 }, max: { a: 571.914212129846 }, from: "shard0001", splitKeys: [ { a: 567.3645636091692 } ], shardId: "test.foo-a_563.897889911273", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee068
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|91||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-183", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722600), what: "split", ns: "test.foo", details: { before: { min: { a: 563.897889911273 }, max: { a: 571.914212129846 }, lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 563.897889911273 }, max: { a: 567.3645636091692 }, lastmod: Timestamp 4000|92, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 567.3645636091692 }, max: { a: 571.914212129846 }, lastmod: Timestamp 4000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 194.8927257678023 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 194.8927257678023 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 194.8927257678023 }, max: { a: 204.0577089538382 }, from: "shard0001", splitKeys: [ { a: 198.5601903660538 } ], shardId: "test.foo-a_194.8927257678023", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee069
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|93||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-184", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722626), what: "split", ns: "test.foo", details: { before: { min: { a: 194.8927257678023 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 2000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 194.8927257678023 }, max: { a: 198.5601903660538 }, lastmod: Timestamp 4000|94, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 198.5601903660538 }, max: { a: 204.0577089538382 }, lastmod: Timestamp 4000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 910.9608546053483 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 790.298943411581 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 209.8684815227433 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 664.5574284897642 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 960.5824651536831 } -->> { : 964.9150523226922 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 373.3849373054079 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 43.98990958864879 } -->> { : 47.94081917961535 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 510.639225969218 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 977.1164746659301 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 977.1164746659301 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 977.1164746659301 }, max: { a: 985.6773819217475 }, from: "shard0001", splitKeys: [ { a: 980.667776515926 } ], shardId: "test.foo-a_977.1164746659301", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee06a
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|95||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-185", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722810), what: "split", ns: "test.foo", details: { before: { min: { a: 977.1164746659301 }, max: { a: 985.6773819217475 }, lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 977.1164746659301 }, max: { a: 980.667776515926 }, lastmod: Timestamp 4000|96, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 980.667776515926 }, max: { a: 985.6773819217475 }, lastmod: Timestamp 4000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 447.8806134954977 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 447.8806134954977 } -->> { : 456.4586339452165 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 447.8806134954977 }, max: { a: 456.4586339452165 }, from: "shard0001", splitKeys: [ { a: 451.8120411874291 } ], shardId: "test.foo-a_447.8806134954977", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee06b
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|97||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-186", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722830), what: "split", ns: "test.foo", details: { before: { min: { a: 447.8806134954977 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 447.8806134954977 }, max: { a: 451.8120411874291 }, lastmod: Timestamp 4000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 451.8120411874291 }, max: { a: 456.4586339452165 }, lastmod: Timestamp 4000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 821.178966084225 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 970.39026226179 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 584.4225320226172 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 216.8904302452864 } -->> { : 225.5962198744838 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 216.8904302452864 } -->> { : 225.5962198744838 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 216.8904302452864 }, max: { a: 225.5962198744838 }, from: "shard0001", splitKeys: [ { a: 220.5716558736682 } ], shardId: "test.foo-a_216.8904302452864", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee06c
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|99||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-187", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722873), what: "split", ns: "test.foo", details: { before: { min: { a: 216.8904302452864 }, max: { a: 225.5962198744838 }, lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 216.8904302452864 }, max: { a: 220.5716558736682 }, lastmod: Timestamp 4000|100, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 220.5716558736682 }, max: { a: 225.5962198744838 }, lastmod: Timestamp 4000|101, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 898.6566515076229 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 47.94081917961535 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:45:22 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 47.94081917961535 } -->> { : 57.56464668319472 }
m30001| Thu Jun 14 01:45:22 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 47.94081917961535 }, max: { a: 57.56464668319472 }, from: "shard0001", splitKeys: [ { a: 51.90923851177054 } ], shardId: "test.foo-a_47.94081917961535", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:22 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7232a28802daeee06d
m30001| Thu Jun 14 01:45:22 [conn2] splitChunk accepted at version 4|101||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:22 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:22-188", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652722901), what: "split", ns: "test.foo", details: { before: { min: { a: 47.94081917961535 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 47.94081917961535 }, max: { a: 51.90923851177054 }, lastmod: Timestamp 4000|102, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 51.90923851177054 }, max: { a: 57.56464668319472 }, lastmod: Timestamp 4000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:22 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
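
Every shard-side split in this log repeats the pattern just seen for the 47.94..57.56 chunk: a split-points lookup that stops after finding enough keys ("max number of requested split points reached (2)"), the incoming splitChunk command, the distributed lock acquired and released around the metadata update, and a "split" changelog entry in between. A hedged shell sketch of the lock bookkeeping behind the acquired/unlocked lines; the collection and field names are the 2.x-era config metadata, so treat them as assumptions rather than something this log states:

    var conf = db.getSiblingDB("config");
    // One document per lockable resource; its "ts" matches the value printed
    // after "acquired, ts :" above, and state 0/2 is assumed to mean unlocked/held.
    conf.locks.find({ _id: "test.foo" }).forEach(printjson);
    // Lock holders ping periodically so a stale lock (timeout 900000 ms above)
    // can be detected and taken over.
    conf.lockpings.find().sort({ ping: -1 }).limit(3).forEach(printjson);
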
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 703.7520953686671 } -->> { : 708.8986861220777 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 358.3343339611492 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 861.9626177544285 } -->> { : 868.5788679342879 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 294.0222214358918 } -->> { : 300.0603324337813 }
m30001| Thu Jun 14 01:45:22 [conn2] request split points lookup for chunk test.foo { : 970.39026226179 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 400.6101810646703 } -->> { : 411.0287894698923 }
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 181 version: 4|93||4fd97a3b0d2fef4d6a507be2 based on: 4|91||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|151||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 571.914212129846 } on: { a: 567.3645636091692 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|93, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 193032 splitThreshold: 943718
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|23||000000000000000000000000 min: { a: 194.8927257678023 } max: { a: 204.0577089538382 } dataWritten: 210773 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 182 version: 4|95||4fd97a3b0d2fef4d6a507be2 based on: 4|93||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|23||000000000000000000000000 min: { a: 194.8927257678023 } max: { a: 204.0577089538382 } on: { a: 198.5601903660538 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|95, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 210362 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 831.0443660622748 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|35||000000000000000000000000 min: { a: 910.9608546053483 } max: { a: 918.4259760765641 } dataWritten: 209718 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 914.7270141454493 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|43||000000000000000000000000 min: { a: 790.298943411581 } max: { a: 797.6352444405507 } dataWritten: 210448 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 794.0480383513316 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|29||000000000000000000000000 min: { a: 209.8684815227433 } max: { a: 216.8904302452864 } dataWritten: 209789 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 213.6626694280229 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|181||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 664.5574284897642 } dataWritten: 209919 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 661.0128507593523 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|25||000000000000000000000000 min: { a: 960.5824651536831 } max: { a: 964.9150523226922 } dataWritten: 210313 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 963.9670996368462 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } dataWritten: 210689 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 361.702529366461 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|27||000000000000000000000000 min: { a: 373.3849373054079 } max: { a: 378.3565272980204 } dataWritten: 209761 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 377.4225830162546 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|43||000000000000000000000000 min: { a: 43.98990958864879 } max: { a: 47.94081917961535 } dataWritten: 209784 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 47.20233068386781 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|36||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 510.639225969218 } dataWritten: 209922 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 509.9987689782791 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|153||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 985.6773819217475 } dataWritten: 209982 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 183 version: 4|97||4fd97a3b0d2fef4d6a507be2 based on: 4|95||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|153||000000000000000000000000 min: { a: 977.1164746659301 } max: { a: 985.6773819217475 } on: { a: 980.667776515926 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|97, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|17||000000000000000000000000 min: { a: 447.8806134954977 } max: { a: 456.4586339452165 } dataWritten: 209932 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 400.6101810646703 } -->> { : 411.0287894698923 }
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 184 version: 4|99||4fd97a3b0d2fef4d6a507be2 based on: 4|97||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|17||000000000000000000000000 min: { a: 447.8806134954977 } max: { a: 456.4586339452165 } on: { a: 451.8120411874291 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|99, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 400.6101810646703 }, max: { a: 411.0287894698923 }, from: "shard0001", splitKeys: [ { a: 404.1458625239371 } ], shardId: "test.foo-a_400.6101810646703", configdb: "localhost:30000" }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|61||000000000000000000000000 min: { a: 821.178966084225 } max: { a: 827.5642418995561 } dataWritten: 210472 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 824.9773622617407 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|45||000000000000000000000000 min: { a: 970.39026226179 } max: { a: 977.1164746659301 } dataWritten: 209760 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 974.5119714066462 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|69||000000000000000000000000 min: { a: 584.4225320226172 } max: { a: 590.8997745355827 } dataWritten: 210316 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 587.8836896919513 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|161||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 225.5962198744838 } dataWritten: 210451 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 185 version: 4|101||4fd97a3b0d2fef4d6a507be2 based on: 4|99||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|161||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 225.5962198744838 } on: { a: 220.5716558736682 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|101, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { a: 898.6566515076229 } max: { a: 905.2934559328332 } dataWritten: 210382 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 902.0623138823873 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|148||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 57.56464668319472 } dataWritten: 209962 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 186 version: 4|103||4fd97a3b0d2fef4d6a507be2 based on: 4|101||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:22 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|148||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 57.56464668319472 } on: { a: 51.90923851177054 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|103, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:22 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|64||000000000000000000000000 min: { a: 703.7520953686671 } max: { a: 708.8986861220777 } dataWritten: 209718 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 707.4367881672047 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|68||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 358.3343339611492 } dataWritten: 210690 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 357.0264609760338 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|21||000000000000000000000000 min: { a: 861.9626177544285 } max: { a: 868.5788679342879 } dataWritten: 210281 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 865.3721036776705 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|10||000000000000000000000000 min: { a: 294.0222214358918 } max: { a: 300.0603324337813 } dataWritten: 209863 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 297.4988023441245 }
m30999| Thu Jun 14 01:45:22 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|45||000000000000000000000000 min: { a: 970.39026226179 } max: { a: 977.1164746659301 } dataWritten: 209797 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:22 [conn] chunk not full enough to trigger auto-split { a: 974.5073749111609 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|170||000000000000000000000000 min: { a: 400.6101810646703 } max: { a: 411.0287894698923 } dataWritten: 210515 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee06e
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|103||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-189", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723018), what: "split", ns: "test.foo", details: { before: { min: { a: 400.6101810646703 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 400.6101810646703 }, max: { a: 404.1458625239371 }, lastmod: Timestamp 4000|104, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 404.1458625239371 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 4000|105, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 187 version: 4|105||4fd97a3b0d2fef4d6a507be2 based on: 4|103||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|170||000000000000000000000000 min: { a: 400.6101810646703 } max: { a: 411.0287894698923 } on: { a: 404.1458625239371 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|105, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { a: 5.826356493812579 } max: { a: 12.55217658236718 } dataWritten: 210660 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:23 [conn11] request split points lookup for chunk test.foo { : 5.826356493812579 } -->> { : 12.55217658236718 }
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 9.383728285439098 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|27||000000000000000000000000 min: { a: 373.3849373054079 } max: { a: 378.3565272980204 } dataWritten: 209776 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 373.3849373054079 } -->> { : 378.3565272980204 }
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 377.3057655363821 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 463.2766201180535 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 694.6501944983177 } -->> { : 703.7520953686671 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 694.6501944983177 } -->> { : 703.7520953686671 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 694.6501944983177 }, max: { a: 703.7520953686671 }, from: "shard0001", splitKeys: [ { a: 698.4329238257609 } ], shardId: "test.foo-a_694.6501944983177", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee06f
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|105||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-190", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723093), what: "split", ns: "test.foo", details: { before: { min: { a: 694.6501944983177 }, max: { a: 703.7520953686671 }, lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 694.6501944983177 }, max: { a: 698.4329238257609 }, lastmod: Timestamp 4000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 698.4329238257609 }, max: { a: 703.7520953686671 }, lastmod: Timestamp 4000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 20.02617482801994 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 131.8115136015859 } -->> { : 136.5735165062921 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 542.4296058071777 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 542.4296058071777 }, max: { a: 552.1925267328988 }, from: "shard0001", splitKeys: [ { a: 545.8257932837977 } ], shardId: "test.foo-a_542.4296058071777", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee070
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|107||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-191", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723159), what: "split", ns: "test.foo", details: { before: { min: { a: 542.4296058071777 }, max: { a: 552.1925267328988 }, lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 542.4296058071777 }, max: { a: 545.8257932837977 }, lastmod: Timestamp 4000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 545.8257932837977 }, max: { a: 552.1925267328988 }, lastmod: Timestamp 4000|109, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 39.89992532263464 } -->> { : 43.98990958864879 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 490.1028421929578 } -->> { : 498.2021416153332 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 490.1028421929578 } -->> { : 498.2021416153332 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|183||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 463.2766201180535 } dataWritten: 210614 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 490.1028421929578 }, max: { a: 498.2021416153332 }, from: "shard0001", splitKeys: [ { a: 493.6797279933101 } ], shardId: "test.foo-a_490.1028421929578", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 459.967076614464 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|149||000000000000000000000000 min: { a: 694.6501944983177 } max: { a: 703.7520953686671 } dataWritten: 210265 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 188 version: 4|107||4fd97a3b0d2fef4d6a507be2 based on: 4|105||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|149||000000000000000000000000 min: { a: 694.6501944983177 } max: { a: 703.7520953686671 } on: { a: 698.4329238257609 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|107, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|73||000000000000000000000000 min: { a: 20.02617482801994 } max: { a: 25.60273139230473 } dataWritten: 209764 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 23.48533495758076 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|47||000000000000000000000000 min: { a: 131.8115136015859 } max: { a: 136.5735165062921 } dataWritten: 209938 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 135.4175551950891 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|129||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 552.1925267328988 } dataWritten: 210371 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee071
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 189 version: 4|109||4fd97a3b0d2fef4d6a507be2 based on: 4|107||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|129||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 552.1925267328988 } on: { a: 545.8257932837977 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|109, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|42||000000000000000000000000 min: { a: 39.89992532263464 } max: { a: 43.98990958864879 } dataWritten: 210203 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 43.32998831707613 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { a: 490.1028421929578 } max: { a: 498.2021416153332 } dataWritten: 210596 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|109||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-192", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723196), what: "split", ns: "test.foo", details: { before: { min: { a: 490.1028421929578 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 490.1028421929578 }, max: { a: 493.6797279933101 }, lastmod: Timestamp 4000|110, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 493.6797279933101 }, max: { a: 498.2021416153332 }, lastmod: Timestamp 4000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 190 version: 4|111||4fd97a3b0d2fef4d6a507be2 based on: 4|109||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|7||000000000000000000000000 min: { a: 490.1028421929578 } max: { a: 498.2021416153332 } on: { a: 493.6797279933101 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|111, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 163.3701742796004 } -->> { : 167.6382092456179 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 563.897889911273 } -->> { : 567.3645636091692 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 383.7239757530736 } -->> { : 387.7659705009871 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 447.8806134954977 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 146.6503611644078 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 146.6503611644078 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 146.6503611644078 }, max: { a: 159.2125242384949 }, from: "shard0001", splitKeys: [ { a: 150.1357777689222 } ], shardId: "test.foo-a_146.6503611644078", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee072
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|111||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-193", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723388), what: "split", ns: "test.foo", details: { before: { min: { a: 146.6503611644078 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 146.6503611644078 }, max: { a: 150.1357777689222 }, lastmod: Timestamp 4000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 150.1357777689222 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 4000|113, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|51||000000000000000000000000 min: { a: 163.3701742796004 } max: { a: 167.6382092456179 } dataWritten: 210449 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 166.7536050099025 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|92||000000000000000000000000 min: { a: 563.897889911273 } max: { a: 567.3645636091692 } dataWritten: 210677 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 567.1537134199725 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|40||000000000000000000000000 min: { a: 383.7239757530736 } max: { a: 387.7659705009871 } dataWritten: 209747 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 387.155643258347 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } dataWritten: 210474 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 361.5013461888514 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|16||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 447.8806134954977 } dataWritten: 210602 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 444.5770970297124 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|124||000000000000000000000000 min: { a: 146.6503611644078 } max: { a: 159.2125242384949 } dataWritten: 210202 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 191 version: 4|113||4fd97a3b0d2fef4d6a507be2 based on: 4|111||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 821.178966084225 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 790.298943411581 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 790.298943411581 } -->> { : 797.6352444405507 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 790.298943411581 }, max: { a: 797.6352444405507 }, from: "shard0001", splitKeys: [ { a: 793.7120312511385 } ], shardId: "test.foo-a_790.298943411581", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee073
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|113||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-194", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723443), what: "split", ns: "test.foo", details: { before: { min: { a: 790.298943411581 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 2000|43, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 790.298943411581 }, max: { a: 793.7120312511385 }, lastmod: Timestamp 4000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 793.7120312511385 }, max: { a: 797.6352444405507 }, lastmod: Timestamp 4000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 150.1357777689222 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 150.1357777689222 } -->> { : 159.2125242384949 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 150.1357777689222 }, max: { a: 159.2125242384949 }, from: "shard0001", splitKeys: [ { a: 153.684305048146 } ], shardId: "test.foo-a_150.1357777689222", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee074
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|115||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-195", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723480), what: "split", ns: "test.foo", details: { before: { min: { a: 150.1357777689222 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 4000|113, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 150.1357777689222 }, max: { a: 153.684305048146 }, lastmod: Timestamp 4000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 153.684305048146 }, max: { a: 159.2125242384949 }, lastmod: Timestamp 4000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 807.4105833931693 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 807.4105833931693 } -->> { : 815.7684070742035 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 807.4105833931693 }, max: { a: 815.7684070742035 }, from: "shard0001", splitKeys: [ { a: 810.8918013325706 } ], shardId: "test.foo-a_807.4105833931693", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee075
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|117||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-196", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723525), what: "split", ns: "test.foo", details: { before: { min: { a: 807.4105833931693 }, max: { a: 815.7684070742035 }, lastmod: Timestamp 4000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 807.4105833931693 }, max: { a: 810.8918013325706 }, lastmod: Timestamp 4000|118, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 810.8918013325706 }, max: { a: 815.7684070742035 }, lastmod: Timestamp 4000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|124||000000000000000000000000 min: { a: 146.6503611644078 } max: { a: 159.2125242384949 } on: { a: 150.1357777689222 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|113, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|60||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 821.178966084225 } dataWritten: 209850 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 819.3870147407733 }
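Every dataWritten value in these "about to initiate autosplit" lines sits just above 1048576 / 5 ≈ 209715 bytes, so mongos appears to probe for split points once roughly a fifth of splitThreshold has been written since the last check; whether a split actually happens is then decided by the shard's split-point lookup ("chunk not full enough" when it finds no usable key). A sketch that extracts those numbers from a saved copy of this output; "mongos_test.log" is a placeholder path:

# Sketch: pull dataWritten / splitThreshold out of the autosplit attempt lines above.
import re

ATTEMPT = re.compile(r"dataWritten: (\d+) splitThreshold: (\d+)")

def autosplit_attempts(path):
    with open(path) as log:
        for line in log:
            if "about to initiate autosplit" not in line:
                continue
            m = ATTEMPT.search(line)
            if m:
                written, threshold = int(m.group(1)), int(m.group(2))
                yield written, threshold, float(written) / threshold

for written, threshold, ratio in autosplit_attempts("mongos_test.log"):
    print(written, threshold, "%.3f" % ratio)   # ratios cluster just above 0.2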
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|43||000000000000000000000000 min: { a: 790.298943411581 } max: { a: 797.6352444405507 } dataWritten: 210358 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 192 version: 4|115||4fd97a3b0d2fef4d6a507be2 based on: 4|113||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|43||000000000000000000000000 min: { a: 790.298943411581 } max: { a: 797.6352444405507 } on: { a: 793.7120312511385 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|115, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|113||000000000000000000000000 min: { a: 150.1357777689222 } max: { a: 159.2125242384949 } dataWritten: 209984 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 193 version: 4|117||4fd97a3b0d2fef4d6a507be2 based on: 4|115||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|113||000000000000000000000000 min: { a: 150.1357777689222 } max: { a: 159.2125242384949 } on: { a: 153.684305048146 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|117, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|19||000000000000000000000000 min: { a: 807.4105833931693 } max: { a: 815.7684070742035 } dataWritten: 209910 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 194 version: 4|119||4fd97a3b0d2fef4d6a507be2 based on: 4|117||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|19||000000000000000000000000 min: { a: 807.4105833931693 } max: { a: 815.7684070742035 } on: { a: 810.8918013325706 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|119, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|115||000000000000000000000000 min: { a: 793.7120312511385 } max: { a: 797.6352444405507 } dataWritten: 210490 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 793.7120312511385 } -->> { : 797.6352444405507 }
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 797.2054299321211 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|139||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 92.91917824556573 } dataWritten: 210768 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 83.77384564239721 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 83.77384564239721 }, max: { a: 92.91917824556573 }, from: "shard0001", splitKeys: [ { a: 87.41840730135154 } ], shardId: "test.foo-a_83.77384564239721", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee076
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|119||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-197", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723580), what: "split", ns: "test.foo", details: { before: { min: { a: 83.77384564239721 }, max: { a: 92.91917824556573 }, lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 83.77384564239721 }, max: { a: 87.41840730135154 }, lastmod: Timestamp 4000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 87.41840730135154 }, max: { a: 92.91917824556573 }, lastmod: Timestamp 4000|121, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 195 version: 4|121||4fd97a3b0d2fef4d6a507be2 based on: 4|119||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|139||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 92.91917824556573 } on: { a: 87.41840730135154 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|121, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
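After each autosplit, mongos reloads its ChunkManager (the bumped sequenceNumber and version above) and pushes the new collection version to each shard via setShardVersion; that version is the highest chunk lastmod recorded in config.chunks. A sketch that recomputes it, assuming pymongo and that the config server from this run is still reachable on localhost:30000:

# Sketch: recompute the collection version that mongos is pushing in setShardVersion.
from pymongo import MongoClient

chunks = MongoClient("localhost", 30000).config.chunks

top = chunks.find_one({"ns": "test.foo"}, sort=[("lastmod", -1)])
print("collection version:", top["lastmod"], "epoch:", top.get("lastmodEpoch"))
print("chunk count:", chunks.count_documents({"ns": "test.foo"}))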
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 848.2332478721062 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 840.7121644073931 } -->> { : 848.2332478721062 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 840.7121644073931 }, max: { a: 848.2332478721062 }, from: "shard0001", splitKeys: [ { a: 843.8858257205128 } ], shardId: "test.foo-a_840.7121644073931", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee077
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|121||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-198", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723597), what: "split", ns: "test.foo", details: { before: { min: { a: 840.7121644073931 }, max: { a: 848.2332478721062 }, lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 840.7121644073931 }, max: { a: 843.8858257205128 }, lastmod: Timestamp 4000|122, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 843.8858257205128 }, max: { a: 848.2332478721062 }, lastmod: Timestamp 4000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 777.6503149863191 } -->> { : 784.2714953599016 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 943.2489828660326 } -->> { : 948.0165404542549 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 790.298943411581 } -->> { : 793.7120312511385 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 498.2021416153332 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 498.2021416153332 }, max: { a: 506.5947777056855 }, from: "shard0001", splitKeys: [ { a: 501.5945768521381 } ], shardId: "test.foo-a_498.2021416153332", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee078
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|123||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-199", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723727), what: "split", ns: "test.foo", details: { before: { min: { a: 498.2021416153332 }, max: { a: 506.5947777056855 }, lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 498.2021416153332 }, max: { a: 501.5945768521381 }, lastmod: Timestamp 4000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 501.5945768521381 }, max: { a: 506.5947777056855 }, lastmod: Timestamp 4000|125, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 47.94081917961535 } -->> { : 51.90923851177054 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 127.4590140914801 } -->> { : 131.8115136015859 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 289.7137301985317 } -->> { : 294.0222214358918 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 141.1884883168546 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 918.4259760765641 } -->> { : 927.6813889109981 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 918.4259760765641 } -->> { : 927.6813889109981 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 918.4259760765641 }, max: { a: 927.6813889109981 }, from: "shard0001", splitKeys: [ { a: 921.5853246168082 } ], shardId: "test.foo-a_918.4259760765641", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee079
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|125||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-200", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723795), what: "split", ns: "test.foo", details: { before: { min: { a: 918.4259760765641 }, max: { a: 927.6813889109981 }, lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 918.4259760765641 }, max: { a: 921.5853246168082 }, lastmod: Timestamp 4000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 921.5853246168082 }, max: { a: 927.6813889109981 }, lastmod: Timestamp 4000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
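The distributed lock acquired and released around each splitChunk above ('test.foo/<host>:<port>:<epoch>:<rand>', 900000 ms timeout, 30000 ms ping interval) should be visible as a document in the config database while held. A hedged sketch, again assuming localhost:30000 (the configdb named in these requests) is still up:

# Sketch: inspect the distributed lock document used by the splitChunk requests above.
from pymongo import MongoClient

config = MongoClient("localhost", 30000).config
print(config.locks.find_one({"_id": "test.foo"}))   # state, who, process, ts (e.g. 4fd97a73...)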
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 756.637103632288 } -->> { : 761.349721153896 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 610.6068178358934 } -->> { : 615.3266278873516 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 331.4018789379612 } -->> { : 337.6965417950217 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|175||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 848.2332478721062 } dataWritten: 209839 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 196 version: 4|123||4fd97a3b0d2fef4d6a507be2 based on: 4|121||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|175||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 848.2332478721062 } on: { a: 843.8858257205128 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|123, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|17||000000000000000000000000 min: { a: 777.6503149863191 } max: { a: 784.2714953599016 } dataWritten: 210650 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 780.9200355196833 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|71||000000000000000000000000 min: { a: 943.2489828660326 } max: { a: 948.0165404542549 } dataWritten: 209809 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 946.6882762769817 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } dataWritten: 209936 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 361.4437656381725 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|114||000000000000000000000000 min: { a: 790.298943411581 } max: { a: 793.7120312511385 } dataWritten: 210007 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 793.6225421735965 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|155||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 506.5947777056855 } dataWritten: 210094 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 197 version: 4|125||4fd97a3b0d2fef4d6a507be2 based on: 4|123||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|155||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 506.5947777056855 } on: { a: 501.5945768521381 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|125, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|102||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 51.90923851177054 } dataWritten: 210731 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 51.60511939011436 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|46||000000000000000000000000 min: { a: 127.4590140914801 } max: { a: 131.8115136015859 } dataWritten: 210108 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 131.0126935053001 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|21||000000000000000000000000 min: { a: 289.7137301985317 } max: { a: 294.0222214358918 } dataWritten: 210343 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 292.9907330542263 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|8||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 141.1884883168546 } dataWritten: 209947 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 139.7627371706759 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|137||000000000000000000000000 min: { a: 918.4259760765641 } max: { a: 927.6813889109981 } dataWritten: 210476 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 198 version: 4|127||4fd97a3b0d2fef4d6a507be2 based on: 4|125||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|137||000000000000000000000000 min: { a: 918.4259760765641 } max: { a: 927.6813889109981 } on: { a: 921.5853246168082 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|127, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|55||000000000000000000000000 min: { a: 756.637103632288 } max: { a: 761.349721153896 } dataWritten: 210358 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 759.9659762160223 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|62||000000000000000000000000 min: { a: 610.6068178358934 } max: { a: 615.3266278873516 } dataWritten: 210269 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 613.7322627026367 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|63||000000000000000000000000 min: { a: 331.4018789379612 } max: { a: 337.6965417950217 } dataWritten: 209775 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 331.4018789379612 } -->> { : 337.6965417950217 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 331.4018789379612 }, max: { a: 337.6965417950217 }, from: "shard0001", splitKeys: [ { a: 334.3168575448847 } ], shardId: "test.foo-a_331.4018789379612", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee07a
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|127||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-201", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723873), what: "split", ns: "test.foo", details: { before: { min: { a: 331.4018789379612 }, max: { a: 337.6965417950217 }, lastmod: Timestamp 4000|63, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 331.4018789379612 }, max: { a: 334.3168575448847 }, lastmod: Timestamp 4000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 334.3168575448847 }, max: { a: 337.6965417950217 }, lastmod: Timestamp 4000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 199 version: 4|129||4fd97a3b0d2fef4d6a507be2 based on: 4|127||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|63||000000000000000000000000 min: { a: 331.4018789379612 } max: { a: 337.6965417950217 } on: { a: 334.3168575448847 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|129, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|48||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 991.2502100401695 } dataWritten: 210610 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 985.6773819217475 } -->> { : 991.2502100401695 }
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 989.0398102302121 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|157||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 580.4600029065366 } dataWritten: 210267 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 571.914212129846 } -->> { : 580.4600029065366 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 571.914212129846 } -->> { : 580.4600029065366 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 571.914212129846 }, max: { a: 580.4600029065366 }, from: "shard0001", splitKeys: [ { a: 575.2102660145707 } ], shardId: "test.foo-a_571.914212129846", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee07b
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|129||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-202", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723897), what: "split", ns: "test.foo", details: { before: { min: { a: 571.914212129846 }, max: { a: 580.4600029065366 }, lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 571.914212129846 }, max: { a: 575.2102660145707 }, lastmod: Timestamp 4000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 575.2102660145707 }, max: { a: 580.4600029065366 }, lastmod: Timestamp 4000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 200 version: 4|131||4fd97a3b0d2fef4d6a507be2 based on: 4|129||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|157||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 580.4600029065366 } on: { a: 575.2102660145707 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|131, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 210490 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 830.6742455784272 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 344.8762285660836 } -->> { : 349.1094580993942 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|48||000000000000000000000000 min: { a: 344.8762285660836 } max: { a: 349.1094580993942 } dataWritten: 210707 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 348.2530982939144 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|27||000000000000000000000000 min: { a: 373.3849373054079 } max: { a: 378.3565272980204 } dataWritten: 210087 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 373.3849373054079 } -->> { : 378.3565272980204 }
m30001| Thu Jun 14 01:45:23 [conn2] request split points lookup for chunk test.foo { : 277.1560315461681 } -->> { : 284.9747465988205 }
m30001| Thu Jun 14 01:45:23 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 277.1560315461681 } -->> { : 284.9747465988205 }
m30001| Thu Jun 14 01:45:23 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 277.1560315461681 }, max: { a: 284.9747465988205 }, from: "shard0001", splitKeys: [ { a: 280.6827052136106 } ], shardId: "test.foo-a_277.1560315461681", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:23 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7332a28802daeee07c
m30999| Thu Jun 14 01:45:23 [conn] chunk not full enough to trigger auto-split { a: 377.0240236026323 }
m30999| Thu Jun 14 01:45:23 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|177||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 284.9747465988205 } dataWritten: 210401 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:23 [conn2] splitChunk accepted at version 4|131||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:23 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:23-203", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652723973), what: "split", ns: "test.foo", details: { before: { min: { a: 277.1560315461681 }, max: { a: 284.9747465988205 }, lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 277.1560315461681 }, max: { a: 280.6827052136106 }, lastmod: Timestamp 4000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 280.6827052136106 }, max: { a: 284.9747465988205 }, lastmod: Timestamp 4000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:23 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 269.785248844529 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 66.37486853611429 } -->> { : 70.06331619195872 }
m30999| Thu Jun 14 01:45:23 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 201 version: 4|133||4fd97a3b0d2fef4d6a507be2 based on: 4|131||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:23 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|177||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 284.9747465988205 } on: { a: 280.6827052136106 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|133, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:23 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|40||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 269.785248844529 } dataWritten: 209956 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 267.3192636972269 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|76||000000000000000000000000 min: { a: 66.37486853611429 } max: { a: 70.06331619195872 } dataWritten: 210587 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 69.68667786461002 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|51||000000000000000000000000 min: { a: 321.3459727153073 } max: { a: 327.5292321238884 } dataWritten: 210344 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 321.3459727153073 } -->> { : 327.5292321238884 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 324.8114979883389 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 991.2502100401695 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 991.2502100401695 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 991.2502100401695 }, max: { a: 998.3975234740553 }, from: "shard0001", splitKeys: [ { a: 994.7222740534528 } ], shardId: "test.foo-a_991.2502100401695", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee07d
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|133||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-204", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724076), what: "split", ns: "test.foo", details: { before: { min: { a: 991.2502100401695 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 2000|49, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 991.2502100401695 }, max: { a: 994.7222740534528 }, lastmod: Timestamp 4000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 994.7222740534528 }, max: { a: 998.3975234740553 }, lastmod: Timestamp 4000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 87.41840730135154 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 644.4017960752651 } -->> { : 648.6747268265868 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 980.667776515926 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 141.1884883168546 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 417.3437896431063 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 417.3437896431063 } -->> { : 422.4151431966537 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 970.39026226179 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 456.4586339452165 } -->> { : 463.2766201180535 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 456.4586339452165 } -->> { : 463.2766201180535 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 456.4586339452165 }, max: { a: 463.2766201180535 }, from: "shard0001", splitKeys: [ { a: 459.7315330482733 } ], shardId: "test.foo-a_456.4586339452165", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee07e
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|135||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-205", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724185), what: "split", ns: "test.foo", details: { before: { min: { a: 456.4586339452165 }, max: { a: 463.2766201180535 }, lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 456.4586339452165 }, max: { a: 459.7315330482733 }, lastmod: Timestamp 4000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 459.7315330482733 }, max: { a: 463.2766201180535 }, lastmod: Timestamp 4000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 644.4017960752651 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 664.5574284897642 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 657.3538695372831 } -->> { : 664.5574284897642 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 657.3538695372831 }, max: { a: 664.5574284897642 }, from: "shard0001", splitKeys: [ { a: 660.6896106858891 } ], shardId: "test.foo-a_657.3538695372831", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee07f
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|137||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-206", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724243), what: "split", ns: "test.foo", details: { before: { min: { a: 657.3538695372831 }, max: { a: 664.5574284897642 }, lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 657.3538695372831 }, max: { a: 660.6896106858891 }, lastmod: Timestamp 4000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 660.6896106858891 }, max: { a: 664.5574284897642 }, lastmod: Timestamp 4000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|49||000000000000000000000000 min: { a: 991.2502100401695 } max: { a: 998.3975234740553 } dataWritten: 210470 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 202 version: 4|135||4fd97a3b0d2fef4d6a507be2 based on: 4|133||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|49||000000000000000000000000 min: { a: 991.2502100401695 } max: { a: 998.3975234740553 } on: { a: 994.7222740534528 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|135, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 192070 splitThreshold: 943718
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split no split entry
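Note the top chunk ({ 998.39... } -->> { MaxKey }) is probed with splitThreshold 943718 rather than the 1048576 used for interior chunks; that value is exactly 90% of the default, which suggests the unbounded top chunk is deliberately given a lower bar. A one-liner to check the arithmetic:

# 943718 in the line above matches 90% of the default 1 MB split threshold.
DEFAULT_SPLIT_THRESHOLD = 1048576
print(int(DEFAULT_SPLIT_THRESHOLD * 0.9))   # -> 943718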
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|121||000000000000000000000000 min: { a: 87.41840730135154 } max: { a: 92.91917824556573 } dataWritten: 210014 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 90.23378527340731 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|85||000000000000000000000000 min: { a: 644.4017960752651 } max: { a: 648.6747268265868 } dataWritten: 210632 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 647.7210633276695 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|97||000000000000000000000000 min: { a: 980.667776515926 } max: { a: 985.6773819217475 } dataWritten: 210305 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 984.2289665550494 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|8||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 141.1884883168546 } dataWritten: 210672 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 139.6851829002293 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|14||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 417.3437896431063 } dataWritten: 210447 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 414.0821551118692 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|66||000000000000000000000000 min: { a: 417.3437896431063 } max: { a: 422.4151431966537 } dataWritten: 209915 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 420.6690019228294 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|45||000000000000000000000000 min: { a: 970.39026226179 } max: { a: 977.1164746659301 } dataWritten: 210497 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 974.1620546109089 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 210568 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 830.6180299199963 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|183||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 463.2766201180535 } dataWritten: 209917 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 203 version: 4|137||4fd97a3b0d2fef4d6a507be2 based on: 4|135||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|183||000000000000000000000000 min: { a: 456.4586339452165 } max: { a: 463.2766201180535 } on: { a: 459.7315330482733 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|137, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 210382 splitThreshold: 943718
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|84||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 644.4017960752651 } dataWritten: 210223 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 643.9497624227961 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|181||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 664.5574284897642 } dataWritten: 210716 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 204 version: 4|139||4fd97a3b0d2fef4d6a507be2 based on: 4|137||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|181||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 664.5574284897642 } on: { a: 660.6896106858891 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|139, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|58||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 873.8718881199745 } dataWritten: 210425 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 868.5788679342879 } -->> { : 873.8718881199745 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 871.7829610895545 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 545.8257932837977 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 567.3645636091692 } -->> { : 571.914212129846 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 545.8257932837977 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 87.41840730135154 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 943.2489828660326 } -->> { : 948.0165404542549 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 387.7659705009871 } -->> { : 392.8718206829087 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 437.040103636678 } -->> { : 441.0435238853461 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 802.4966878498034 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 765.2211241548246 } -->> { : 773.3799848158397 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 765.2211241548246 } -->> { : 773.3799848158397 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 765.2211241548246 }, max: { a: 773.3799848158397 }, from: "shard0001", splitKeys: [ { a: 768.6399184840259 } ], shardId: "test.foo-a_765.2211241548246", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee080
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|139||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-207", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724410), what: "split", ns: "test.foo", details: { before: { min: { a: 765.2211241548246 }, max: { a: 773.3799848158397 }, lastmod: Timestamp 4000|71, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 765.2211241548246 }, max: { a: 768.6399184840259 }, lastmod: Timestamp 4000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 768.6399184840259 }, max: { a: 773.3799848158397 }, lastmod: Timestamp 4000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
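The block above is one complete shard-side split, as seen from mongod: the split-points lookup stops as soon as a midpoint is found ("max number of requested split points reached (2)"), the shard receives the splitChunk command, takes the collection's distributed lock, confirms the expected shard version, records a "split" changelog event describing the before/left/right chunk ranges, and releases the lock. The sketch below imitates that sequence with plain Python stand-ins; dist_lock, handle_split_chunk, and the dict layouts are illustrative, not MongoDB's internal API.

# Illustrative sketch only: the shard-side handling of a splitChunk request,
# mirroring the lock -> version check -> split -> changelog -> unlock sequence above.
from contextlib import contextmanager

@contextmanager
def dist_lock(ns):
    # mirrors "distributed lock '<ns>/...' acquired" / "... unlocked"
    print("distributed lock for", ns, "acquired")
    try:
        yield
    finally:
        print("distributed lock for", ns, "unlocked")

def handle_split_chunk(request, collection_version, changelog):
    # 'request' mirrors the logged splitChunk command document
    with dist_lock(request["splitChunk"]):
        print("splitChunk accepted at version", collection_version)
        split_key = request["splitKeys"][0]
        before = {"min": request["min"], "max": request["max"]}
        left   = {"min": request["min"], "max": split_key}
        right  = {"min": split_key, "max": request["max"]}
        changelog.append({"what": "split", "ns": request["splitChunk"],
                          "details": {"before": before, "left": left, "right": right}})
        return left, right

changelog = []
left, right = handle_split_chunk(
    {"splitChunk": "test.foo",
     "min": {"a": 765.2211241548246}, "max": {"a": 773.3799848158397},
     "splitKeys": [{"a": 768.6399184840259}]},
    collection_version="4|139",
    changelog=changelog)
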
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 501.5945768521381 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 660.6896106858891 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 198.5601903660538 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 531.7597013546634 } -->> { : 536.0462960134931 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|108||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 545.8257932837977 } dataWritten: 210752 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 545.5293183782908 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|93||000000000000000000000000 min: { a: 567.3645636091692 } max: { a: 571.914212129846 } dataWritten: 210577 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 570.7059982927827 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|109||000000000000000000000000 min: { a: 545.8257932837977 } max: { a: 552.1925267328988 } dataWritten: 210043 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 549.1120701914166 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|121||000000000000000000000000 min: { a: 87.41840730135154 } max: { a: 92.91917824556573 } dataWritten: 209941 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 90.15310322716053 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|71||000000000000000000000000 min: { a: 943.2489828660326 } max: { a: 948.0165404542549 } dataWritten: 210053 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 946.4890287474911 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|41||000000000000000000000000 min: { a: 387.7659705009871 } max: { a: 392.8718206829087 } dataWritten: 209739 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 391.0463380330911 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|75||000000000000000000000000 min: { a: 437.040103636678 } max: { a: 441.0435238853461 } dataWritten: 210591 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 440.3643313364252 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|10||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 802.4966878498034 } dataWritten: 209969 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 800.7988846974129 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|71||000000000000000000000000 min: { a: 765.2211241548246 } max: { a: 773.3799848158397 } dataWritten: 209868 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 205 version: 4|141||4fd97a3b0d2fef4d6a507be2 based on: 4|139||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|71||000000000000000000000000 min: { a: 765.2211241548246 } max: { a: 773.3799848158397 } on: { a: 768.6399184840259 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|141, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
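After each successful split, mongos rebuilds its routing table incrementally rather than rereading every chunk: "ChunkManager: time to load chunks ... version: 4|141 ... based on: 4|139" suggests that only chunk documents whose lastmod is newer than the previously cached collection version are fetched, and sequenceNumber counts the rebuilt ChunkManagers. The sketch below shows such a differential merge under that assumption; parse_version, refresh_chunk_map, and the document shapes are illustrative stand-ins for the config.chunks metadata.

# Illustrative sketch only: a differential routing-table refresh keyed on chunk
# versions, in the spirit of the "based on: <old version>" lines above.

def parse_version(v):
    # "4|141" -> (4, 141); tuples compare as (major, minor)
    major, minor = v.split("|")
    return int(major), int(minor)

def refresh_chunk_map(chunk_map, config_chunks, known_version):
    """Merge only chunk documents newer than the version already cached."""
    floor = parse_version(known_version)
    newest = floor
    for doc in config_chunks:                    # stand-in for a config.chunks query
        lastmod = parse_version(doc["lastmod"])
        if lastmod > floor:
            chunk_map[doc["min"]["a"]] = doc     # keyed by the chunk's min shard-key value
            newest = max(newest, lastmod)
    return chunk_map, "%d|%d" % newest

chunk_map = {}
updates = [{"min": {"a": 765.2211241548246}, "max": {"a": 768.6399184840259}, "lastmod": "4|140"},
           {"min": {"a": 768.6399184840259}, "max": {"a": 773.3799848158397}, "lastmod": "4|141"}]
chunk_map, version = refresh_chunk_map(chunk_map, updates, known_version="4|139")
print("reloaded", len(chunk_map), "chunks, version now", version)   # -> 2 chunks, 4|141
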
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 189127 splitThreshold: 943718
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|125||000000000000000000000000 min: { a: 501.5945768521381 } max: { a: 506.5947777056855 } dataWritten: 209844 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 504.4186779808853 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|138||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 660.6896106858891 } dataWritten: 210276 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 660.6480281222485 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|95||000000000000000000000000 min: { a: 198.5601903660538 } max: { a: 204.0577089538382 } dataWritten: 210224 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 201.8817808755888 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|22||000000000000000000000000 min: { a: 531.7597013546634 } max: { a: 536.0462960134931 } dataWritten: 210770 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 534.8318844236367 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 802.4966878498034 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|10||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 802.4966878498034 } dataWritten: 209809 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 800.7570224184825 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|70||000000000000000000000000 min: { a: 938.1160661714987 } max: { a: 943.2489828660326 } dataWritten: 210566 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 938.1160661714987 } -->> { : 943.2489828660326 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 941.3789486482926 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|90||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 594.3878051880898 } dataWritten: 209760 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 594.3878051880898 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 593.8901193136843 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|67||000000000000000000000000 min: { a: 477.2807394020033 } max: { a: 483.6281235892167 } dataWritten: 210133 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 477.2807394020033 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 477.2807394020033 } -->> { : 483.6281235892167 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 477.2807394020033 }, max: { a: 483.6281235892167 }, from: "shard0001", splitKeys: [ { a: 480.2747403619077 } ], shardId: "test.foo-a_477.2807394020033", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee081
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|141||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-208", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724607), what: "split", ns: "test.foo", details: { before: { min: { a: 477.2807394020033 }, max: { a: 483.6281235892167 }, lastmod: Timestamp 4000|67, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 477.2807394020033 }, max: { a: 480.2747403619077 }, lastmod: Timestamp 4000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 480.2747403619077 }, max: { a: 483.6281235892167 }, lastmod: Timestamp 4000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 206 version: 4|143||4fd97a3b0d2fef4d6a507be2 based on: 4|141||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|67||000000000000000000000000 min: { a: 477.2807394020033 } max: { a: 483.6281235892167 } on: { a: 480.2747403619077 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|143, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
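The setShardVersion / "setShardVersion success" pairs are mongos pushing its (major|minor, epoch) view of test.foo to each shard after the metadata change, and the shard acknowledging with the version it previously held. The trace keeps reporting oldVersion 4000|1 even as the router's version advances, so the real bookkeeping is more nuanced than a single counter; the sketch below collapses it into one ShardState purely to illustrate the shape of the exchange (class and field names are assumptions, not the wire protocol).

# Illustrative sketch only: the version handshake implied by the
# setShardVersion / "setShardVersion success" pairs above.

class ShardState:
    # minimal stand-in for a shard's view of one collection's version
    def __init__(self, version, epoch):
        self.version = version           # (major, minor), e.g. (4000, 1)
        self.epoch = epoch

    def set_shard_version(self, version, epoch):
        if epoch != self.epoch:
            return {"ok": 0, "errmsg": "version epoch mismatch, reload needed"}
        old_major, old_minor = self.version
        if version > self.version:
            self.version = version       # adopt the router's newer view
        return {"oldVersion": "%d|%d" % (old_major, old_minor),
                "oldVersionEpoch": self.epoch, "ok": 1.0}

shard0001 = ShardState(version=(4000, 1), epoch="4fd97a3b0d2fef4d6a507be2")
print(shard0001.set_shard_version((4000, 143), "4fd97a3b0d2fef4d6a507be2"))
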
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } dataWritten: 210084 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 207.4251268374692 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|131||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 738.6198156338151 } dataWritten: 210688 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 729.8361633348899 } -->> { : 738.6198156338151 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 729.8361633348899 } -->> { : 738.6198156338151 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 729.8361633348899 }, max: { a: 738.6198156338151 }, from: "shard0001", splitKeys: [ { a: 732.9348251743502 } ], shardId: "test.foo-a_729.8361633348899", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee082
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|143||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-209", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724640), what: "split", ns: "test.foo", details: { before: { min: { a: 729.8361633348899 }, max: { a: 738.6198156338151 }, lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 729.8361633348899 }, max: { a: 732.9348251743502 }, lastmod: Timestamp 4000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 732.9348251743502 }, max: { a: 738.6198156338151 }, lastmod: Timestamp 4000|145, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 207 version: 4|145||4fd97a3b0d2fef4d6a507be2 based on: 4|143||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|131||000000000000000000000000 min: { a: 729.8361633348899 } max: { a: 738.6198156338151 } on: { a: 732.9348251743502 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|145, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 136.5735165062921 } -->> { : 141.1884883168546 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 198.5601903660538 } -->> { : 204.0577089538382 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 910.9608546053483 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 910.9608546053483 } -->> { : 918.4259760765641 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 910.9608546053483 }, max: { a: 918.4259760765641 }, from: "shard0001", splitKeys: [ { a: 914.1361338478089 } ], shardId: "test.foo-a_910.9608546053483", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee083
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|145||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-210", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724717), what: "split", ns: "test.foo", details: { before: { min: { a: 910.9608546053483 }, max: { a: 918.4259760765641 }, lastmod: Timestamp 2000|35, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 910.9608546053483 }, max: { a: 914.1361338478089 }, lastmod: Timestamp 4000|146, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 914.1361338478089 }, max: { a: 918.4259760765641 }, lastmod: Timestamp 4000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 777.6503149863191 } -->> { : 784.2714953599016 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 777.6503149863191 } -->> { : 784.2714953599016 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 777.6503149863191 }, max: { a: 784.2714953599016 }, from: "shard0001", splitKeys: [ { a: 780.6933276463033 } ], shardId: "test.foo-a_777.6503149863191", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee084
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|147||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-211", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724740), what: "split", ns: "test.foo", details: { before: { min: { a: 777.6503149863191 }, max: { a: 784.2714953599016 }, lastmod: Timestamp 4000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 777.6503149863191 }, max: { a: 780.6933276463033 }, lastmod: Timestamp 4000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 780.6933276463033 }, max: { a: 784.2714953599016 }, lastmod: Timestamp 4000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 994.7222740534528 } -->> { : 998.3975234740553 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 545.8257932837977 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 545.8257932837977 } -->> { : 552.1925267328988 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 545.8257932837977 }, max: { a: 552.1925267328988 }, from: "shard0001", splitKeys: [ { a: 548.9817180888258 } ], shardId: "test.foo-a_545.8257932837977", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee085
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|149||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-212", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724799), what: "split", ns: "test.foo", details: { before: { min: { a: 545.8257932837977 }, max: { a: 552.1925267328988 }, lastmod: Timestamp 4000|109, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 545.8257932837977 }, max: { a: 548.9817180888258 }, lastmod: Timestamp 4000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 548.9817180888258 }, max: { a: 552.1925267328988 }, lastmod: Timestamp 4000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 628.1995001147562 } -->> { : 632.4786347534061 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|8||000000000000000000000000 min: { a: 136.5735165062921 } max: { a: 141.1884883168546 } dataWritten: 210519 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 139.5338604256962 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|95||000000000000000000000000 min: { a: 198.5601903660538 } max: { a: 204.0577089538382 } dataWritten: 210599 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 201.8325700765957 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|35||000000000000000000000000 min: { a: 910.9608546053483 } max: { a: 918.4259760765641 } dataWritten: 210259 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 208 version: 4|147||4fd97a3b0d2fef4d6a507be2 based on: 4|145||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|35||000000000000000000000000 min: { a: 910.9608546053483 } max: { a: 918.4259760765641 } on: { a: 914.1361338478089 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|147, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|17||000000000000000000000000 min: { a: 777.6503149863191 } max: { a: 784.2714953599016 } dataWritten: 209725 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 209 version: 4|149||4fd97a3b0d2fef4d6a507be2 based on: 4|147||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|17||000000000000000000000000 min: { a: 777.6503149863191 } max: { a: 784.2714953599016 } on: { a: 780.6933276463033 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|149, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|135||000000000000000000000000 min: { a: 994.7222740534528 } max: { a: 998.3975234740553 } dataWritten: 210487 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 998.1367313216036 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|109||000000000000000000000000 min: { a: 545.8257932837977 } max: { a: 552.1925267328988 } dataWritten: 210257 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 210 version: 4|151||4fd97a3b0d2fef4d6a507be2 based on: 4|149||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|109||000000000000000000000000 min: { a: 545.8257932837977 } max: { a: 552.1925267328988 } on: { a: 548.9817180888258 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|151, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|14||000000000000000000000000 min: { a: 628.1995001147562 } max: { a: 632.4786347534061 } dataWritten: 210222 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 631.3014745243286 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|41||000000000000000000000000 min: { a: 387.7659705009871 } max: { a: 392.8718206829087 } dataWritten: 210495 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 387.7659705009871 } -->> { : 392.8718206829087 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 390.9373518759137 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|108||000000000000000000000000 min: { a: 542.4296058071777 } max: { a: 545.8257932837977 } dataWritten: 210137 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 542.4296058071777 } -->> { : 545.8257932837977 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 545.4215000695358 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|42||000000000000000000000000 min: { a: 784.2714953599016 } max: { a: 790.298943411581 } dataWritten: 210224 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 784.2714953599016 } -->> { : 790.298943411581 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 787.5697837093666 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|100||000000000000000000000000 min: { a: 216.8904302452864 } max: { a: 220.5716558736682 } dataWritten: 209832 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 216.8904302452864 } -->> { : 220.5716558736682 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 220.0568689185003 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|93||000000000000000000000000 min: { a: 567.3645636091692 } max: { a: 571.914212129846 } dataWritten: 210097 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 567.3645636091692 } -->> { : 571.914212129846 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 570.5713329147677 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|30||000000000000000000000000 min: { a: 827.5642418995561 } max: { a: 833.5963963333859 } dataWritten: 209892 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 827.5642418995561 } -->> { : 833.5963963333859 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 830.5243109595592 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|159||000000000000000000000000 min: { a: 948.0165404542549 } max: { a: 955.9182567868356 } dataWritten: 210512 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 948.0165404542549 } -->> { : 955.9182567868356 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 948.0165404542549 } -->> { : 955.9182567868356 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 948.0165404542549 }, max: { a: 955.9182567868356 }, from: "shard0001", splitKeys: [ { a: 951.1531632632295 } ], shardId: "test.foo-a_948.0165404542549", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee086
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|151||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-213", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724857), what: "split", ns: "test.foo", details: { before: { min: { a: 948.0165404542549 }, max: { a: 955.9182567868356 }, lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 948.0165404542549 }, max: { a: 951.1531632632295 }, lastmod: Timestamp 4000|152, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 951.1531632632295 }, max: { a: 955.9182567868356 }, lastmod: Timestamp 4000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
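Each "split" changelog entry also shows the chunk-version arithmetic: regardless of the parent's lastmod (1000|159 in the entry above), the two children take the collection's current major version and the next two minor versions (4000|152 and 4000|153 here, immediately after "splitChunk accepted at version 4|151"), stamped with the collection epoch. A small worked sketch of that bump, treating the Timestamp as a plain (major, minor) pair; split_versions is an illustrative helper, not a MongoDB function.

# Illustrative sketch only: child-chunk version assignment as seen in the
# "split" changelog entries (split accepted at 4|151 -> children 4000|152, 4000|153).

def split_versions(collection_version):
    """lastmod values for the two chunks produced by one split, given the
    collection version at the time the split is accepted."""
    major, minor = collection_version
    left = (major, minor + 1)
    right = (major, minor + 2)
    return left, right

left, right = split_versions((4000, 151))   # split accepted at version 4|151
assert left == (4000, 152) and right == (4000, 153)
print("left lastmod %d|%d, right lastmod %d|%d" % (left + right))
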
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 211 version: 4|153||4fd97a3b0d2fef4d6a507be2 based on: 4|151||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|159||000000000000000000000000 min: { a: 948.0165404542549 } max: { a: 955.9182567868356 } on: { a: 951.1531632632295 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|153, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 689.5707127489441 } -->> { : 694.6501944983177 }
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 181.7281932506388 } -->> { : 188.6698238706465 }
m30001| Thu Jun 14 01:45:24 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 181.7281932506388 } -->> { : 188.6698238706465 }
m30001| Thu Jun 14 01:45:24 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 181.7281932506388 }, max: { a: 188.6698238706465 }, from: "shard0001", splitKeys: [ { a: 184.9464054233513 } ], shardId: "test.foo-a_181.7281932506388", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:24 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|29||000000000000000000000000 min: { a: 689.5707127489441 } max: { a: 694.6501944983177 } dataWritten: 209790 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 692.5886728427078 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|33||000000000000000000000000 min: { a: 181.7281932506388 } max: { a: 188.6698238706465 } dataWritten: 210160 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7432a28802daeee087
m30001| Thu Jun 14 01:45:24 [conn2] splitChunk accepted at version 4|153||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:24 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:24-214", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652724941), what: "split", ns: "test.foo", details: { before: { min: { a: 181.7281932506388 }, max: { a: 188.6698238706465 }, lastmod: Timestamp 2000|33, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 181.7281932506388 }, max: { a: 184.9464054233513 }, lastmod: Timestamp 4000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 184.9464054233513 }, max: { a: 188.6698238706465 }, lastmod: Timestamp 4000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:24 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:24 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 212 version: 4|155||4fd97a3b0d2fef4d6a507be2 based on: 4|153||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:24 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|33||000000000000000000000000 min: { a: 181.7281932506388 } max: { a: 188.6698238706465 } on: { a: 184.9464054233513 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|155, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:24 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|50||000000000000000000000000 min: { a: 315.9151551096841 } max: { a: 321.3459727153073 } dataWritten: 210123 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 315.9151551096841 } -->> { : 321.3459727153073 }
m30999| Thu Jun 14 01:45:24 [conn] chunk not full enough to trigger auto-split { a: 319.2095494190272 }
m30999| Thu Jun 14 01:45:24 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|61||000000000000000000000000 min: { a: 821.178966084225 } max: { a: 827.5642418995561 } dataWritten: 210038 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:24 [conn2] request split points lookup for chunk test.foo { : 821.178966084225 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 599.2155367136296 } -->> { : 603.53104016638 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 980.667776515926 } -->> { : 985.6773819217475 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 882.331873780809 } -->> { : 886.5207670748756 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 964.9150523226922 } -->> { : 970.39026226179 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 114.9662096443472 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 114.9662096443472 } -->> { : 123.1918419151289 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 114.9662096443472 }, max: { a: 123.1918419151289 }, from: "shard0001", splitKeys: [ { a: 118.3157678917793 } ], shardId: "test.foo-a_114.9662096443472", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee088
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 824.4599978799424 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|12||000000000000000000000000 min: { a: 599.2155367136296 } max: { a: 603.53104016638 } dataWritten: 209861 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 602.2587826258007 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|97||000000000000000000000000 min: { a: 980.667776515926 } max: { a: 985.6773819217475 } dataWritten: 209936 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 983.9835236826526 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|58||000000000000000000000000 min: { a: 882.331873780809 } max: { a: 886.5207670748756 } dataWritten: 210587 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 885.5884727820697 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|44||000000000000000000000000 min: { a: 964.9150523226922 } max: { a: 970.39026226179 } dataWritten: 210678 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 967.9911034928951 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } dataWritten: 210370 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 361.1535720540613 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|79||000000000000000000000000 min: { a: 114.9662096443472 } max: { a: 123.1918419151289 } dataWritten: 210393 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|155||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-215", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725070), what: "split", ns: "test.foo", details: { before: { min: { a: 114.9662096443472 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 4000|79, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 114.9662096443472 }, max: { a: 118.3157678917793 }, lastmod: Timestamp 4000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 118.3157678917793 }, max: { a: 123.1918419151289 }, lastmod: Timestamp 4000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 531.7597013546634 } -->> { : 536.0462960134931 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 886.5207670748756 } -->> { : 891.8750702869381 }
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 213 version: 4|157||4fd97a3b0d2fef4d6a507be2 based on: 4|155||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|79||000000000000000000000000 min: { a: 114.9662096443472 } max: { a: 123.1918419151289 } on: { a: 118.3157678917793 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|157, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|22||000000000000000000000000 min: { a: 531.7597013546634 } max: { a: 536.0462960134931 } dataWritten: 210129 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 534.7100070065618 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|59||000000000000000000000000 min: { a: 886.5207670748756 } max: { a: 891.8750702869381 } dataWritten: 210001 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 889.5984712541298 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|122||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 843.8858257205128 } dataWritten: 210284 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 843.8858257205128 }
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 843.6160691193425 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|179||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 685.0292821001574 } dataWritten: 210429 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 678.3563510786536 } -->> { : 685.0292821001574 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 678.3563510786536 } -->> { : 685.0292821001574 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 678.3563510786536 }, max: { a: 685.0292821001574 }, from: "shard0001", splitKeys: [ { a: 681.3003030169281 } ], shardId: "test.foo-a_678.3563510786536", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee089
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|157||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-216", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725111), what: "split", ns: "test.foo", details: { before: { min: { a: 678.3563510786536 }, max: { a: 685.0292821001574 }, lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 678.3563510786536 }, max: { a: 681.3003030169281 }, lastmod: Timestamp 4000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 681.3003030169281 }, max: { a: 685.0292821001574 }, lastmod: Timestamp 4000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 214 version: 4|159||4fd97a3b0d2fef4d6a507be2 based on: 4|157||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|179||000000000000000000000000 min: { a: 678.3563510786536 } max: { a: 685.0292821001574 } on: { a: 681.3003030169281 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|159, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|141||000000000000000000000000 min: { a: 768.6399184840259 } max: { a: 773.3799848158397 } dataWritten: 209755 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 768.6399184840259 } -->> { : 773.3799848158397 }
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 771.6699008382188 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 123.1918419151289 } -->> { : 127.4590140914801 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 427.2300955074828 } -->> { : 433.3806610330477 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 427.2300955074828 } -->> { : 433.3806610330477 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 427.2300955074828 }, max: { a: 433.3806610330477 }, from: "shard0001", splitKeys: [ { a: 430.2130944220548 } ], shardId: "test.foo-a_427.2300955074828", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee08a
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|159||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-217", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725163), what: "split", ns: "test.foo", details: { before: { min: { a: 427.2300955074828 }, max: { a: 433.3806610330477 }, lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 427.2300955074828 }, max: { a: 430.2130944220548 }, lastmod: Timestamp 4000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 430.2130944220548 }, max: { a: 433.3806610330477 }, lastmod: Timestamp 4000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 636.2085863336085 } -->> { : 640.7093733209429 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 848.2332478721062 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 848.2332478721062 } -->> { : 855.8703567421647 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 848.2332478721062 }, max: { a: 855.8703567421647 }, from: "shard0001", splitKeys: [ { a: 851.468355264985 } ], shardId: "test.foo-a_848.2332478721062", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee08b
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|161||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-218", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725194), what: "split", ns: "test.foo", details: { before: { min: { a: 848.2332478721062 }, max: { a: 855.8703567421647 }, lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 848.2332478721062 }, max: { a: 851.468355264985 }, lastmod: Timestamp 4000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 851.468355264985 }, max: { a: 855.8703567421647 }, lastmod: Timestamp 4000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 459.7315330482733 } -->> { : 463.2766201180535 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 199109 splitThreshold: 943718
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split no split entry
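Each "about to initiate autosplit" line above pairs a per-chunk dataWritten estimate with a splitThreshold (1048576 bytes for most chunks here, 943718 for the top MaxKey chunk), and the probe ends either in "chunk not full enough" or in an actual split. The following is a rough, hypothetical model of that bookkeeping, not mongos source: accumulate an estimated byte count per chunk and ask the shard for split points once it reaches about a fifth of the threshold, which is roughly where the dataWritten values in this log cluster (around 210 KB against a 1 MB threshold). The class name and the 0.2 factor are inventions for the illustration; only the threshold values come from the log.

SPLIT_THRESHOLD = 1048576          # value printed by mongos for most chunks above

class ChunkWriteTracker:
    def __init__(self, threshold=SPLIT_THRESHOLD, probe_fraction=0.2):
        self.threshold = threshold
        self.probe_fraction = probe_fraction   # hypothetical knob, not from the log
        self.written = {}                      # (min_key, max_key) -> estimated bytes written

    def record(self, chunk_range, nbytes):
        """Return True when enough data has accumulated to ask for split points."""
        total = self.written.get(chunk_range, 0) + nbytes
        if total >= self.probe_fraction * self.threshold:
            self.written[chunk_range] = 0      # start a fresh estimate after each probe
            return True
        self.written[chunk_range] = total
        return False

tracker = ChunkWriteTracker()
print(tracker.record((678.36, 685.03), 209755))   # True -> would trigger a split-points lookup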
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|4||000000000000000000000000 min: { a: 123.1918419151289 } max: { a: 127.4590140914801 } dataWritten: 210263 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 125.965858278783 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|18||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 433.3806610330477 } dataWritten: 210625 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 215 version: 4|161||4fd97a3b0d2fef4d6a507be2 based on: 4|159||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|18||000000000000000000000000 min: { a: 427.2300955074828 } max: { a: 433.3806610330477 } on: { a: 430.2130944220548 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|161, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|61||000000000000000000000000 min: { a: 636.2085863336085 } max: { a: 640.7093733209429 } dataWritten: 210468 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 639.298415603138 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|176||000000000000000000000000 min: { a: 848.2332478721062 } max: { a: 855.8703567421647 } dataWritten: 210763 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 216 version: 4|163||4fd97a3b0d2fef4d6a507be2 based on: 4|161||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|176||000000000000000000000000 min: { a: 848.2332478721062 } max: { a: 855.8703567421647 } on: { a: 851.468355264985 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|163, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|137||000000000000000000000000 min: { a: 459.7315330482733 } max: { a: 463.2766201180535 } dataWritten: 210516 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 462.8309289798291 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } dataWritten: 210035 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 188.6698238706465 } -->> { : 194.8927257678023 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 188.6698238706465 }, max: { a: 194.8927257678023 }, from: "shard0001", splitKeys: [ { a: 191.5307698720086 } ], shardId: "test.foo-a_188.6698238706465", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee08c
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|163||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-219", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725229), what: "split", ns: "test.foo", details: { before: { min: { a: 188.6698238706465 }, max: { a: 194.8927257678023 }, lastmod: Timestamp 2000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 188.6698238706465 }, max: { a: 191.5307698720086 }, lastmod: Timestamp 4000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 191.5307698720086 }, max: { a: 194.8927257678023 }, lastmod: Timestamp 4000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 217 version: 4|165||4fd97a3b0d2fef4d6a507be2 based on: 4|163||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|22||000000000000000000000000 min: { a: 188.6698238706465 } max: { a: 194.8927257678023 } on: { a: 191.5307698720086 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|165, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
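Every split above also produces an "about to log metadata event" document ({ what: "split", ns: "test.foo", details: { before, left, right } }) that is persisted on the config server so the split history can be reviewed after the run. A small sketch follows, assuming pymongo is available and that these events land in the config database's changelog collection; the driver and the collection name are assumptions here, not something this log states. The field names in the query and the print come straight from the event documents logged above.

from pymongo import MongoClient

config = MongoClient("localhost", 30000)   # the config server used by this test
for ev in config.config.changelog.find({"what": "split", "ns": "test.foo"}).limit(5):
    # details.before is the pre-split range, details.left.max is the chosen split point
    print(ev["time"], ev["details"]["before"]["min"], "->", ev["details"]["left"]["max"])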
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|95||000000000000000000000000 min: { a: 198.5601903660538 } max: { a: 204.0577089538382 } dataWritten: 210363 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 198.5601903660538 } -->> { : 204.0577089538382 }
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 201.7317970111895 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 404.1458625239371 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 404.1458625239371 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 404.1458625239371 }, max: { a: 411.0287894698923 }, from: "shard0001", splitKeys: [ { a: 407.0796926580036 } ], shardId: "test.foo-a_404.1458625239371", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee08d
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|165||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-220", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725319), what: "split", ns: "test.foo", details: { before: { min: { a: 404.1458625239371 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 4000|105, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 404.1458625239371 }, max: { a: 407.0796926580036 }, lastmod: Timestamp 4000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 407.0796926580036 }, max: { a: 411.0287894698923 }, lastmod: Timestamp 4000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 748.6872188241756 } -->> { : 752.6019558395919 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 685.0292821001574 } -->> { : 689.5707127489441 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 181.7281932506388 } -->> { : 184.9464054233513 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 87.41840730135154 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 331.4018789379612 } -->> { : 334.3168575448847 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 868.5788679342879 } -->> { : 873.8718881199745 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 660.6896106858891 } -->> { : 664.5574284897642 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 898.6566515076229 } -->> { : 905.2934559328332 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|105||000000000000000000000000 min: { a: 404.1458625239371 } max: { a: 411.0287894698923 } dataWritten: 210438 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 218 version: 4|167||4fd97a3b0d2fef4d6a507be2 based on: 4|165||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|105||000000000000000000000000 min: { a: 404.1458625239371 } max: { a: 411.0287894698923 } on: { a: 407.0796926580036 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|167, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|81||000000000000000000000000 min: { a: 748.6872188241756 } max: { a: 752.6019558395919 } dataWritten: 210522 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 751.5046865601606 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|28||000000000000000000000000 min: { a: 685.0292821001574 } max: { a: 689.5707127489441 } dataWritten: 210183 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 688.0837421585233 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|154||000000000000000000000000 min: { a: 181.7281932506388 } max: { a: 184.9464054233513 } dataWritten: 209988 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 184.863169014616 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|121||000000000000000000000000 min: { a: 87.41840730135154 } max: { a: 92.91917824556573 } dataWritten: 210515 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 90.02344931711015 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 209629 splitThreshold: 943718
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|128||000000000000000000000000 min: { a: 331.4018789379612 } max: { a: 334.3168575448847 } dataWritten: 210119 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 334.0794958214925 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|58||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 873.8718881199745 } dataWritten: 210702 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 871.5746973163834 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|139||000000000000000000000000 min: { a: 660.6896106858891 } max: { a: 664.5574284897642 } dataWritten: 210081 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 663.6338956251591 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { a: 898.6566515076229 } max: { a: 905.2934559328332 } dataWritten: 210362 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 898.6566515076229 } -->> { : 905.2934559328332 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 898.6566515076229 }, max: { a: 905.2934559328332 }, from: "shard0001", splitKeys: [ { a: 901.6037051063506 } ], shardId: "test.foo-a_898.6566515076229", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee08e
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|167||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-221", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725427), what: "split", ns: "test.foo", details: { before: { min: { a: 898.6566515076229 }, max: { a: 905.2934559328332 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 898.6566515076229 }, max: { a: 901.6037051063506 }, lastmod: Timestamp 4000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 901.6037051063506 }, max: { a: 905.2934559328332 }, lastmod: Timestamp 4000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 219 version: 4|169||4fd97a3b0d2fef4d6a507be2 based on: 4|167||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|5||000000000000000000000000 min: { a: 898.6566515076229 } max: { a: 905.2934559328332 } on: { a: 901.6037051063506 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|169, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|149||000000000000000000000000 min: { a: 780.6933276463033 } max: { a: 784.2714953599016 } dataWritten: 210561 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 780.6933276463033 } -->> { : 784.2714953599016 }
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 783.6355347918063 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|53||000000000000000000000000 min: { a: 521.3538677091974 } max: { a: 526.919018850918 } dataWritten: 210276 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 521.3538677091974 } -->> { : 526.919018850918 }
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 524.1789381537635 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 225.5962198744838 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 225.5962198744838 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 225.5962198744838 }, max: { a: 233.8565055904641 }, from: "shard0001", splitKeys: [ { a: 228.7035403403385 } ], shardId: "test.foo-a_225.5962198744838", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee08f
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|169||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-222", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725480), what: "split", ns: "test.foo", details: { before: { min: { a: 225.5962198744838 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 225.5962198744838 }, max: { a: 228.7035403403385 }, lastmod: Timestamp 4000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 228.7035403403385 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 4000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 861.9626177544285 } -->> { : 868.5788679342879 }
m30001| Thu Jun 14 01:45:25 [conn2] max number of requested split points reached (2) before the end of chunk test.foo { : 861.9626177544285 } -->> { : 868.5788679342879 }
m30001| Thu Jun 14 01:45:25 [conn2] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 861.9626177544285 }, max: { a: 868.5788679342879 }, from: "shard0001", splitKeys: [ { a: 864.7746195980726 } ], shardId: "test.foo-a_861.9626177544285", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:25 [conn2] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7532a28802daeee090
m30001| Thu Jun 14 01:45:25 [conn2] splitChunk accepted at version 4|171||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:25 [conn2] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:25-223", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48969", time: new Date(1339652725492), what: "split", ns: "test.foo", details: { before: { min: { a: 861.9626177544285 }, max: { a: 868.5788679342879 }, lastmod: Timestamp 2000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 861.9626177544285 }, max: { a: 864.7746195980726 }, lastmod: Timestamp 4000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 864.7746195980726 }, max: { a: 868.5788679342879 }, lastmod: Timestamp 4000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:25 [conn2] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 70.06331619195872 } -->> { : 74.43717892117874 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 258.6206493525194 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 74.43717892117874 } -->> { : 78.73686651492073 }
m30001| Thu Jun 14 01:45:25 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.6, filling with zeroes...
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|162||000000000000000000000000 min: { a: 225.5962198744838 } max: { a: 233.8565055904641 } dataWritten: 210507 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 220 version: 4|171||4fd97a3b0d2fef4d6a507be2 based on: 4|169||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|162||000000000000000000000000 min: { a: 225.5962198744838 } max: { a: 233.8565055904641 } on: { a: 228.7035403403385 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|171, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|21||000000000000000000000000 min: { a: 861.9626177544285 } max: { a: 868.5788679342879 } dataWritten: 210321 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 221 version: 4|173||4fd97a3b0d2fef4d6a507be2 based on: 4|171||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:25 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|21||000000000000000000000000 min: { a: 861.9626177544285 } max: { a: 868.5788679342879 } on: { a: 864.7746195980726 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|173, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|77||000000000000000000000000 min: { a: 70.06331619195872 } max: { a: 74.43717892117874 } dataWritten: 209751 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 73.08665494623578 }
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:25 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|31||000000000000000000000000 min: { a: 258.6206493525194 } max: { a: 264.0825842924789 } dataWritten: 209878 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 261.8224808473594 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|32||000000000000000000000000 min: { a: 74.43717892117874 } max: { a: 78.73686651492073 } dataWritten: 209919 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 77.34581510290018 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|140||000000000000000000000000 min: { a: 765.2211241548246 } max: { a: 768.6399184840259 } dataWritten: 210511 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 768.4360214915042 }
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 765.2211241548246 } -->> { : 768.6399184840259 }
m30999| Thu Jun 14 01:45:25 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|36||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 383.7239757530736 } dataWritten: 209818 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:25 [conn] chunk not full enough to trigger auto-split { a: 381.2627322479224 }
m30001| Thu Jun 14 01:45:25 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:23096853 211ms
m30001| Thu Jun 14 01:45:25 [conn2] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 383.7239757530736 }
m30001| Thu Jun 14 01:45:26 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:23408866 309ms
m30999| Thu Jun 14 01:45:26 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|104||000000000000000000000000 min: { a: 400.6101810646703 } max: { a: 404.1458625239371 } dataWritten: 210611 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:26 [conn] chunk not full enough to trigger auto-split { a: 403.3975965834779 }
m30001| Thu Jun 14 01:45:26 [conn2] request split points lookup for chunk test.foo { : 400.6101810646703 } -->> { : 404.1458625239371 }
m30001| Thu Jun 14 01:45:26 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:23644770 133ms
m30001| Thu Jun 14 01:45:26 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:23790361 143ms
m30001| Thu Jun 14 01:45:26 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:24014314 221ms
m30001| Thu Jun 14 01:45:26 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:24222909 205ms
m30999| Thu Jun 14 01:45:26 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|30||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 258.6206493525194 } dataWritten: 209952 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:26 [conn2] request split points lookup for chunk test.foo { : 254.1395685736485 } -->> { : 258.6206493525194 }
m30999| Thu Jun 14 01:45:26 [conn] chunk not full enough to trigger auto-split { a: 257.0743062660343 }
m30001| Thu Jun 14 01:45:27 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25039753 814ms
m30001| Thu Jun 14 01:45:27 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25158989 116ms
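The m30001 "insert test.foo ... NNNms" lines above are slow-operation reports; the 814ms one lines up with the FileAllocator run that starts preallocating /data/db/mrShardedOutput1/test.6 a moment earlier. Here is an illustrative Python helper (names invented for this sketch) for pulling those timings out of log lines in exactly this format, e.g. to spot write stalls during file preallocation.

import re

SLOW_INSERT = re.compile(r"insert (\S+) .* (\d+)ms$")

def slow_inserts(lines, min_ms=100):
    """Yield (namespace, duration_ms) for every slow insert line at or above min_ms."""
    for line in lines:
        m = SLOW_INSERT.search(line.rstrip())
        if m and int(m.group(2)) >= min_ms:
            yield m.group(1), int(m.group(2))

log = [
    "m30001| Thu Jun 14 01:45:27 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25039753 814ms",
    "m30001| Thu Jun 14 01:45:27 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25158989 116ms",
]
print(list(slow_inserts(log)))   # [('test.foo', 814), ('test.foo', 116)]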
m30001| Thu Jun 14 01:45:27 [clientcursormon] mem (MB) res:505 virt:1238 mapped:1023
m30000| Thu Jun 14 01:45:28 [clientcursormon] mem (MB) res:153 virt:349 mapped:160
m30999| Thu Jun 14 01:45:28 [Balancer] Refreshing MaxChunkSize: 1
m30000| Thu Jun 14 01:45:28 [conn3] update config.lockpings query: { _id: "domU-12-31-39-01-70-B4:30999:1339652667:1804289383" } update: { $set: { ping: new Date(1339652727299) } } nscanned:1 nupdated:1 keyUpdates:1 locks(micros) r:3135 w:723482 720ms
m30999| Thu Jun 14 01:45:28 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:45:28 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a780d2fef4d6a507be9" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a580d2fef4d6a507be6" } }
m30999| Thu Jun 14 01:45:28 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:45:27 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652667:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:45:28 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a780d2fef4d6a507be9
m30999| Thu Jun 14 01:45:28 [Balancer] *** start balancing round
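Before starting the round, the balancer prints the lock document it is about to write (state 1, why: "doing balance round") alongside the previous, released document (state 0). Those documents live in the locks collection of the config database on the config server, localhost:30000 in this test. A sketch of inspecting them directly, assuming pymongo; the driver choice is an assumption, and no lock states beyond the 0 and 1 shown above are inferred here.

from pymongo import MongoClient

config = MongoClient("localhost", 30000)
doc = config.config.locks.find_one({"_id": "balancer"})
if doc is not None:
    # state, who, and why mirror the fields mongos printed when acquiring the lock above
    print(doc["state"], doc.get("who"), doc.get("why"))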
m30001| Thu Jun 14 01:45:28 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25297443 136ms
m30001| Thu Jun 14 01:45:28 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25453229 153ms
m30001| Thu Jun 14 01:45:28 [conn5] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:199294 reslen:1777 203ms
m30999| Thu Jun 14 01:45:28 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:45:28 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:45:28 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:45:28 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:45:28 [Balancer] shard0000
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] shard0001
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 4000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 4000|72, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_20.02617482801994", lastmod: Timestamp 4000|73, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 20.02617482801994 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 4000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 34.95140019143683 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_34.95140019143683", lastmod: Timestamp 4000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 34.95140019143683 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 4000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 43.98990958864879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_43.98990958864879", lastmod: Timestamp 4000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 43.98990958864879 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 4000|102, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 51.90923851177054 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_51.90923851177054", lastmod: Timestamp 4000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 51.90923851177054 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 4000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 61.76919454003927 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_61.76919454003927", lastmod: Timestamp 4000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 61.76919454003927 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 4000|76, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 70.06331619195872 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_70.06331619195872", lastmod: Timestamp 4000|77, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 70.06331619195872 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 4000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 78.73686651492073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_78.73686651492073", lastmod: Timestamp 4000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 78.73686651492073 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 4000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 87.41840730135154 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_87.41840730135154", lastmod: Timestamp 4000|121, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 87.41840730135154 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 4000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 106.0311910436654 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_106.0311910436654", lastmod: Timestamp 4000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 106.0311910436654 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 4000|78, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 114.9662096443472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_114.9662096443472", lastmod: Timestamp 4000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 114.9662096443472 }, max: { a: 118.3157678917793 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_118.3157678917793", lastmod: Timestamp 4000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 118.3157678917793 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 127.4590140914801 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_127.4590140914801", lastmod: Timestamp 4000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 127.4590140914801 }, max: { a: 131.8115136015859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_131.8115136015859", lastmod: Timestamp 4000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 131.8115136015859 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 141.1884883168546 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_141.1884883168546", lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 141.1884883168546 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 4000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 150.1357777689222 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_150.1357777689222", lastmod: Timestamp 4000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 150.1357777689222 }, max: { a: 153.684305048146 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_153.684305048146", lastmod: Timestamp 4000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 153.684305048146 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 4000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 163.3701742796004 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_163.3701742796004", lastmod: Timestamp 4000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 163.3701742796004 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 2000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 4000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 184.9464054233513 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_184.9464054233513", lastmod: Timestamp 4000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 184.9464054233513 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 4000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 191.5307698720086 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_191.5307698720086", lastmod: Timestamp 4000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 191.5307698720086 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 4000|94, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 198.5601903660538 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_198.5601903660538", lastmod: Timestamp 4000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 198.5601903660538 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 2000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 2000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 4000|100, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 220.5716558736682 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_220.5716558736682", lastmod: Timestamp 4000|101, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 220.5716558736682 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 4000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 228.7035403403385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_228.7035403403385", lastmod: Timestamp 4000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 228.7035403403385 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 4000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 258.6206493525194 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_258.6206493525194", lastmod: Timestamp 4000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 258.6206493525194 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 4000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 280.6827052136106 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_280.6827052136106", lastmod: Timestamp 4000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 280.6827052136106 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 289.7137301985317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_289.7137301985317", lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 289.7137301985317 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 2000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 2000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 4000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 331.4018789379612 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_331.4018789379612", lastmod: Timestamp 4000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 331.4018789379612 }, max: { a: 334.3168575448847 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_334.3168575448847", lastmod: Timestamp 4000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 334.3168575448847 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 4000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 349.1094580993942 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_349.1094580993942", lastmod: Timestamp 4000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 349.1094580993942 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 2000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 2000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 4000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 373.3849373054079 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_373.3849373054079", lastmod: Timestamp 4000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 373.3849373054079 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 2000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 4000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 387.7659705009871 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_387.7659705009871", lastmod: Timestamp 4000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 387.7659705009871 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 4000|104, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 404.1458625239371 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_404.1458625239371", lastmod: Timestamp 4000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 404.1458625239371 }, max: { a: 407.0796926580036 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_407.0796926580036", lastmod: Timestamp 4000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 407.0796926580036 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 4000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 430.2130944220548 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_430.2130944220548", lastmod: Timestamp 4000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 430.2130944220548 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 4000|74, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 437.040103636678 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_437.040103636678", lastmod: Timestamp 4000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 437.040103636678 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 4000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 451.8120411874291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_451.8120411874291", lastmod: Timestamp 4000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 451.8120411874291 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 4000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 459.7315330482733 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_459.7315330482733", lastmod: Timestamp 4000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 459.7315330482733 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 4000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 477.2807394020033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_477.2807394020033", lastmod: Timestamp 4000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 477.2807394020033 }, max: { a: 480.2747403619077 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_480.2747403619077", lastmod: Timestamp 4000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 480.2747403619077 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 4000|110, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 493.6797279933101 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_493.6797279933101", lastmod: Timestamp 4000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 493.6797279933101 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 4000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 501.5945768521381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_501.5945768521381", lastmod: Timestamp 4000|125, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 501.5945768521381 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 4000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 510.639225969218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_510.639225969218", lastmod: Timestamp 4000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 510.639225969218 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 2000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 531.7597013546634 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_531.7597013546634", lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 531.7597013546634 }, max: { a: 536.0462960134931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_536.0462960134931", lastmod: Timestamp 4000|23, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 536.0462960134931 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 4000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 545.8257932837977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_545.8257932837977", lastmod: Timestamp 4000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 545.8257932837977 }, max: { a: 548.9817180888258 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_548.9817180888258", lastmod: Timestamp 4000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 548.9817180888258 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 2000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 2000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 4000|92, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 567.3645636091692 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_567.3645636091692", lastmod: Timestamp 4000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 567.3645636091692 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 4000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 575.2102660145707 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_575.2102660145707", lastmod: Timestamp 4000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 575.2102660145707 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 4000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 584.4225320226172 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_584.4225320226172", lastmod: Timestamp 4000|69, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 584.4225320226172 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 4000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 594.3878051880898 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_594.3878051880898", lastmod: Timestamp 4000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 594.3878051880898 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 603.53104016638 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_603.53104016638", lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 603.53104016638 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 2000|63, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 628.1995001147562 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_628.1995001147562", lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 628.1995001147562 }, max: { a: 632.4786347534061 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_632.4786347534061", lastmod: Timestamp 4000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 632.4786347534061 }, max: { a: 636.2085863336085 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_636.2085863336085", lastmod: Timestamp 4000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 636.2085863336085 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 4000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 644.4017960752651 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_644.4017960752651", lastmod: Timestamp 4000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 644.4017960752651 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 4000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 652.9401841699823 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_652.9401841699823", lastmod: Timestamp 4000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 652.9401841699823 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 4000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 660.6896106858891 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_660.6896106858891", lastmod: Timestamp 4000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 660.6896106858891 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 4000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 668.6362621623331 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_668.6362621623331", lastmod: Timestamp 4000|88, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 668.6362621623331 }, max: { a: 672.2870891659105 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_672.2870891659105", lastmod: Timestamp 4000|89, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 672.2870891659105 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 4000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 681.3003030169281 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_681.3003030169281", lastmod: Timestamp 4000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 681.3003030169281 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 4000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 689.5707127489441 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_689.5707127489441", lastmod: Timestamp 4000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 689.5707127489441 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 4000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 698.4329238257609 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_698.4329238257609", lastmod: Timestamp 4000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 698.4329238257609 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 4000|82, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 725.5771489434317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_725.5771489434317", lastmod: Timestamp 4000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 725.5771489434317 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 4000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 732.9348251743502 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_732.9348251743502", lastmod: Timestamp 4000|145, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 732.9348251743502 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 4000|80, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 748.6872188241756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_748.6872188241756", lastmod: Timestamp 4000|81, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 748.6872188241756 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 4000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 756.637103632288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_756.637103632288", lastmod: Timestamp 4000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 756.637103632288 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 4000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 765.2211241548246 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_765.2211241548246", lastmod: Timestamp 4000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 765.2211241548246 }, max: { a: 768.6399184840259 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_768.6399184840259", lastmod: Timestamp 4000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 768.6399184840259 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 777.6503149863191 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_777.6503149863191", lastmod: Timestamp 4000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 777.6503149863191 }, max: { a: 780.6933276463033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_780.6933276463033", lastmod: Timestamp 4000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 780.6933276463033 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 2000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 4000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 793.7120312511385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_793.7120312511385", lastmod: Timestamp 4000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 793.7120312511385 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 802.4966878498034 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_802.4966878498034", lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 802.4966878498034 }, max: { a: 807.4105833931693 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_807.4105833931693", lastmod: Timestamp 4000|118, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 807.4105833931693 }, max: { a: 810.8918013325706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_810.8918013325706", lastmod: Timestamp 4000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 810.8918013325706 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 2000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 2000|31, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 4000|122, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 843.8858257205128 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_843.8858257205128", lastmod: Timestamp 4000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 843.8858257205128 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 4000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 851.468355264985 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_851.468355264985", lastmod: Timestamp 4000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 851.468355264985 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 4000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 864.7746195980726 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_864.7746195980726", lastmod: Timestamp 4000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 864.7746195980726 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 4000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 877.8438233640235 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_877.8438233640235", lastmod: Timestamp 4000|87, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 877.8438233640235 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 4000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 886.5207670748756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_886.5207670748756", lastmod: Timestamp 4000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 886.5207670748756 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 4000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 901.6037051063506 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_901.6037051063506", lastmod: Timestamp 4000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 901.6037051063506 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 2000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 4000|146, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 914.1361338478089 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_914.1361338478089", lastmod: Timestamp 4000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 914.1361338478089 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 4000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 921.5853246168082 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_921.5853246168082", lastmod: Timestamp 4000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 921.5853246168082 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 4000|152, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 951.1531632632295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_951.1531632632295", lastmod: Timestamp 4000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 951.1531632632295 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 960.5824651536831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_960.5824651536831", lastmod: Timestamp 4000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 960.5824651536831 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 2000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 4000|96, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 980.667776515926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_980.667776515926", lastmod: Timestamp 4000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 980.667776515926 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 2000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 4000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 994.7222740534528 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_994.7222740534528", lastmod: Timestamp 4000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 994.7222740534528 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] ----
m30999| Thu Jun 14 01:45:28 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:45:28 [Balancer] donor : 211 chunks on shard0001
m30999| Thu Jun 14 01:45:28 [Balancer] receiver : 3 chunks on shard0000
m30999| Thu Jun 14 01:45:28 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 4000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:45:28 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:45:28 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
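[editor's note] The preceding Balancer summary ("donor : 211 chunks on shard0001", "receiver : 3 chunks on shard0000", "chose [shard0001] to [shard0000] ...") together with the ShardInfoMap lines shows the round in which mongos decides to migrate one test.foo chunk. The sketch below is only an illustration of that donor/receiver choice, not the actual mongos code (the real policy lives in the balancer and also weighs tag ranges and other state); the helper name pick_migration and the threshold value of 8 are assumptions made for this example, while the shard names, chunk counts, maxSize/currSize/draining flags and the "move the lowest-range chunk" behaviour are read off the log lines above.

    # Hypothetical sketch of the balancer's donor/receiver selection, assuming a
    # simple "chunk-count difference >= threshold" rule. Not mongos source code.

    def pick_migration(shard_chunks, shard_info, threshold=8):
        """Return (donor, receiver, chunk) or None if the collection looks balanced.

        shard_chunks: dict shard_name -> ordered list of chunk docs
        shard_info:   dict shard_name -> {"maxSize": MB or 0 for unlimited,
                                          "currSize": MB, "draining": bool,
                                          "hasOpsQueued": bool}
        """
        # Receiver: the non-draining shard with the fewest chunks that is not
        # over its configured maxSize (maxSize == 0 is treated as "no limit").
        def can_receive(name):
            info = shard_info[name]
            over_size = info["maxSize"] > 0 and info["currSize"] >= info["maxSize"]
            return not info["draining"] and not over_size

        receiver = min((s for s in shard_chunks if can_receive(s)),
                       key=lambda s: len(shard_chunks[s]), default=None)
        # Donor: the shard currently holding the most chunks for this collection.
        donor = max(shard_chunks, key=lambda s: len(shard_chunks[s]))

        if receiver is None or donor == receiver:
            return None
        imbalance = len(shard_chunks[donor]) - len(shard_chunks[receiver])
        if imbalance < threshold:
            return None
        # Move the donor's lowest-range chunk, mirroring the log line that picks
        # the chunk starting at { a: 12.55217658236718 }.
        return donor, receiver, shard_chunks[donor][0]


    if __name__ == "__main__":
        chunks = {
            "shard0000": [{"min": i} for i in range(3)],    # 3 chunks (receiver)
            "shard0001": [{"min": i} for i in range(211)],  # 211 chunks (donor)
        }
        info = {
            "shard0000": {"maxSize": 0, "currSize": 160,
                          "draining": False, "hasOpsQueued": False},
            "shard0001": {"maxSize": 0, "currSize": 1023,
                          "draining": False, "hasOpsQueued": False},
        }
        print(pick_migration(chunks, info))
        # -> ('shard0001', 'shard0000', {'min': 0})

Run against the counts in this log (211 vs 3 chunks), the sketch reproduces the decision recorded above: shard0001 is the donor, shard0000 the receiver, and the first chunk in the donor's range is the one proposed for migration.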
m30999| Thu Jun 14 01:45:28 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:45:28 [Balancer] shard0000
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff228c8')", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, max: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22c95')", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, max: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff2305f')", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, max: { _id: ObjectId('4fd97a3d05a35677eff23246') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2342c')", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, max: { _id: ObjectId('4fd97a3d05a35677eff23611') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff237f5')", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, max: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23bc4')", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, max: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23f8f')", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24176') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2435d')", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, max: { _id: ObjectId('4fd97a3d05a35677eff24541') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24727')", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24727') }, max: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24af4')", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, max: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24ec4')", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, max: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25295')", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25295') }, max: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25663')", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25663') }, max: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25a31')", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, max: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25e01')", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, max: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff261d0')", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, max: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff26598')", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff26598') }, max: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26964')", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26964') }, max: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26d35')", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, max: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27105')", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27105') }, max: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff274d5')", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, max: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff278a1')", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, max: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27c6f')", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2803f')", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, max: { _id: ObjectId('4fd97a3f05a35677eff28226') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2840d')", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, max: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff287d7')", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff287d7') }, max: { _id: ObjectId('4fd97a4005a35677eff289bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28ba4')", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, max: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28f71')", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28f71') }, max: { _id: ObjectId('4fd97a4005a35677eff29159') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2933f')", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2933f') }, max: { _id: ObjectId('4fd97a4005a35677eff29523') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29708')", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29708') }, max: { _id: ObjectId('4fd97a4005a35677eff298ed') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29ad4')", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, max: { _id: ObjectId('4fd97a4005a35677eff29cba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29e9f')", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, max: { _id: ObjectId('4fd97a4005a35677eff2a086') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a26b')", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, max: { _id: ObjectId('4fd97a4005a35677eff2a450') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a636')", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a636') }, max: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2aa03')", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, max: { _id: ObjectId('4fd97a4105a35677eff2abea') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2add0')", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2add0') }, max: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b1a0')", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, max: { _id: ObjectId('4fd97a4105a35677eff2b387') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b56f')", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, max: { _id: ObjectId('4fd97a4105a35677eff2b757') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b93b')", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, max: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bd07')", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, max: { _id: ObjectId('4fd97a4205a35677eff2beee') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c0d4')", lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, max: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c4a2')", lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, max: { _id: ObjectId('4fd97a4205a35677eff2c687') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c86f')", lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, max: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2cc39')", lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, max: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d008')", lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d008') }, max: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d3d5')", lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, max: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d7a1')", lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, max: { _id: ObjectId('4fd97a4305a35677eff2d986') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2db6f')", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, max: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2df3e')", lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, max: { _id: ObjectId('4fd97a4305a35677eff2e127') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e30d')", lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, max: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e6d8')", lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, max: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2eaa5')", lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, max: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ee6d')", lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, max: { _id: ObjectId('4fd97a4305a35677eff2f052') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f239')", lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f239') }, max: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f603')", lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f603') }, max: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f9cd')", lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, max: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fd9a')", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, max: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3016a')", lastmod: Timestamp 1000|115, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3016a') }, max: { _id: ObjectId('4fd97a4405a35677eff30351') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30537')", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30537') }, max: { _id: ObjectId('4fd97a4405a35677eff30721') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30907')", lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30907') }, max: { _id: ObjectId('4fd97a4405a35677eff30aef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30cd5')", lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, max: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff310a7')", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff310a7') }, max: { _id: ObjectId('4fd97a4405a35677eff3128e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31473')", lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31473') }, max: { _id: ObjectId('4fd97a4405a35677eff3165b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31841')", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31841') }, max: { _id: ObjectId('4fd97a4405a35677eff31a28') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31c0d')", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, max: { _id: ObjectId('4fd97a4405a35677eff31df3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31fda')", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31fda') }, max: { _id: ObjectId('4fd97a4405a35677eff321bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff323a4')", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff323a4') }, max: { _id: ObjectId('4fd97a4405a35677eff3258c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32774')", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32774') }, max: { _id: ObjectId('4fd97a4505a35677eff32958') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32b3d')", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, max: { _id: ObjectId('4fd97a4505a35677eff32d23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32f0c')", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, max: { _id: ObjectId('4fd97a4505a35677eff330f5') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff332d9')", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff332d9') }, max: { _id: ObjectId('4fd97a4505a35677eff334c2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff336ab')", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff336ab') }, max: { _id: ObjectId('4fd97a4505a35677eff33891') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33a77')", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33a77') }, max: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33e41')", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33e41') }, max: { _id: ObjectId('4fd97a4605a35677eff34026') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff3420d')", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff3420d') }, max: { _id: ObjectId('4fd97a4605a35677eff343f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff345d9')", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff345d9') }, max: { _id: ObjectId('4fd97a4605a35677eff347c1') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff349a9')", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff349a9') }, max: { _id: ObjectId('4fd97a4705a35677eff34b90') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34d79')", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34d79') }, max: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35147')", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35147') }, max: { _id: ObjectId('4fd97a4705a35677eff3532c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35511')", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35511') }, max: { _id: ObjectId('4fd97a4705a35677eff356fa') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff358e1')", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff358e1') }, max: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35cab')", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35cab') }, max: { _id: ObjectId('4fd97a4705a35677eff35e91') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3607a')", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3607a') }, max: { _id: ObjectId('4fd97a4805a35677eff3625f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36447')", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36447') }, max: { _id: ObjectId('4fd97a4805a35677eff3662c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36814')", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36814') }, max: { _id: ObjectId('4fd97a4805a35677eff369f9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36be0')", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36be0') }, max: { _id: ObjectId('4fd97a4805a35677eff36dca') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36faf')", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36faf') }, max: { _id: ObjectId('4fd97a4805a35677eff37195') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3737a')", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3737a') }, max: { _id: ObjectId('4fd97a4805a35677eff37560') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37747')", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37747') }, max: { _id: ObjectId('4fd97a4905a35677eff3792f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37b15')", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37b15') }, max: { _id: ObjectId('4fd97a4905a35677eff37cff') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37ee8')", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, max: { _id: ObjectId('4fd97a4905a35677eff380d0') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff382b9')", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff382b9') }, max: { _id: ObjectId('4fd97a4905a35677eff3849e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38684')", lastmod: Timestamp 1000|185, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38684') }, max: { _id: ObjectId('4fd97a4905a35677eff38869') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38a4e')", lastmod: Timestamp 1000|187, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, max: { _id: ObjectId('4fd97a4905a35677eff38c32') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38e1d')", lastmod: Timestamp 1000|189, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, max: { _id: ObjectId('4fd97a4905a35677eff39001') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff391e8')", lastmod: Timestamp 1000|191, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff391e8') }, max: { _id: ObjectId('4fd97a4905a35677eff393cf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff395b6')", lastmod: Timestamp 1000|193, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff395b6') }, max: { _id: ObjectId('4fd97a4905a35677eff3979b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39985')", lastmod: Timestamp 1000|195, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39985') }, max: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39d51')", lastmod: Timestamp 1000|197, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, max: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a121')", lastmod: Timestamp 1000|199, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a4ed')", lastmod: Timestamp 1000|201, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a8b9')", lastmod: Timestamp 1000|203, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, max: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3ac84')", lastmod: Timestamp 1000|205, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:28 [Balancer] shard0001
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: MinKey }, max: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22aac')", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, max: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22e7b')", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, max: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23246')", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23246') }, max: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23611')", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23611') }, max: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff239dc')", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, max: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23da9')", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, max: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24176')", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24176') }, max: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24541')", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24541') }, max: { _id: ObjectId('4fd97a3d05a35677eff24727') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2490f')", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24cde')", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, max: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff250ad')", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, max: { _id: ObjectId('4fd97a3e05a35677eff25295') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2547d')", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, max: { _id: ObjectId('4fd97a3e05a35677eff25663') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2584a')", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, max: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25c16')", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, max: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25fe8')", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, max: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff263b4')", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, max: { _id: ObjectId('4fd97a3e05a35677eff26598') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2677e')", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, max: { _id: ObjectId('4fd97a3f05a35677eff26964') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26b4c')", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, max: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26f1f')", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27105') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff272ec')", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, max: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff276ba')", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, max: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27a87')", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, max: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27e57')", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, max: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff28226')", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff28226') }, max: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff285f3')", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, max: { _id: ObjectId('4fd97a4005a35677eff287d7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff289bf')", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff289bf') }, max: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28d8b')", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, max: { _id: ObjectId('4fd97a4005a35677eff28f71') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29159')", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29159') }, max: { _id: ObjectId('4fd97a4005a35677eff2933f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29523')", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29523') }, max: { _id: ObjectId('4fd97a4005a35677eff29708') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff298ed')", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff298ed') }, max: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29cba')", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29cba') }, max: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a086')", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a086') }, max: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a450')", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a450') }, max: { _id: ObjectId('4fd97a4105a35677eff2a636') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a81d')", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, max: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2abea')", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2abea') }, max: { _id: ObjectId('4fd97a4105a35677eff2add0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2afb8')", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, max: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b387')", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b387') }, max: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b757')", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b757') }, max: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bb23')", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, max: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2beee')", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2beee') }, max: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c2bb')", lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, max: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c687')", lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c687') }, max: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ca54')", lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, max: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ce20')", lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, max: { _id: ObjectId('4fd97a4205a35677eff2d008') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d1ef')", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, max: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d5bc')", lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, max: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2d986')", lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2d986') }, max: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2dd54')", lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, max: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e127')", lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e127') }, max: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e4f2')", lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, max: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e8bf')", lastmod: Timestamp 1000|102, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, max: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ec89')", lastmod: Timestamp 1000|104, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, max: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f052')", lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f052') }, max: { _id: ObjectId('4fd97a4305a35677eff2f239') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f41f')", lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, max: { _id: ObjectId('4fd97a4305a35677eff2f603') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f7e7')", lastmod: Timestamp 1000|110, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, max: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fbb4')", lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, max: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ff82')", lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, max: { _id: ObjectId('4fd97a4405a35677eff3016a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30351')", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30351') }, max: { _id: ObjectId('4fd97a4405a35677eff30537') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30721')", lastmod: Timestamp 1000|118, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30721') }, max: { _id: ObjectId('4fd97a4405a35677eff30907') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30aef')", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30aef') }, max: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30ebc')", lastmod: Timestamp 1000|122, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, max: { _id: ObjectId('4fd97a4405a35677eff310a7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3128e')", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3128e') }, max: { _id: ObjectId('4fd97a4405a35677eff31473') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3165b')", lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3165b') }, max: { _id: ObjectId('4fd97a4405a35677eff31841') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31a28')", lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31a28') }, max: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31df3')", lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31df3') }, max: { _id: ObjectId('4fd97a4405a35677eff31fda') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff321bf')", lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff321bf') }, max: { _id: ObjectId('4fd97a4405a35677eff323a4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3258c')", lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3258c') }, max: { _id: ObjectId('4fd97a4505a35677eff32774') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32958')", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32958') }, max: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32d23')", lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32d23') }, max: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff330f5')", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff330f5') }, max: { _id: ObjectId('4fd97a4505a35677eff332d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff334c2')", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff334c2') }, max: { _id: ObjectId('4fd97a4505a35677eff336ab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff33891')", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff33891') }, max: { _id: ObjectId('4fd97a4605a35677eff33a77') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33c5c')", lastmod: Timestamp 1000|146, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, max: { _id: ObjectId('4fd97a4605a35677eff33e41') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff34026')", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff34026') }, max: { _id: ObjectId('4fd97a4605a35677eff3420d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff343f3')", lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff343f3') }, max: { _id: ObjectId('4fd97a4605a35677eff345d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff347c1')", lastmod: Timestamp 1000|152, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff347c1') }, max: { _id: ObjectId('4fd97a4605a35677eff349a9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34b90')", lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34b90') }, max: { _id: ObjectId('4fd97a4705a35677eff34d79') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34f5f')", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, max: { _id: ObjectId('4fd97a4705a35677eff35147') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff3532c')", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff3532c') }, max: { _id: ObjectId('4fd97a4705a35677eff35511') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff356fa')", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff356fa') }, max: { _id: ObjectId('4fd97a4705a35677eff358e1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35ac6')", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, max: { _id: ObjectId('4fd97a4705a35677eff35cab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35e91')", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35e91') }, max: { _id: ObjectId('4fd97a4805a35677eff3607a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3625f')", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3625f') }, max: { _id: ObjectId('4fd97a4805a35677eff36447') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3662c')", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3662c') }, max: { _id: ObjectId('4fd97a4805a35677eff36814') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff369f9')", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff369f9') }, max: { _id: ObjectId('4fd97a4805a35677eff36be0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36dca')", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36dca') }, max: { _id: ObjectId('4fd97a4805a35677eff36faf') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37195')", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37195') }, max: { _id: ObjectId('4fd97a4805a35677eff3737a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37560')", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37560') }, max: { _id: ObjectId('4fd97a4905a35677eff37747') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3792f')", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3792f') }, max: { _id: ObjectId('4fd97a4905a35677eff37b15') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37cff')", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37cff') }, max: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff380d0')", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff380d0') }, max: { _id: ObjectId('4fd97a4905a35677eff382b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3849e')", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3849e') }, max: { _id: ObjectId('4fd97a4905a35677eff38684') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38869')", lastmod: Timestamp 1000|186, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38869') }, max: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38c32')", lastmod: Timestamp 1000|188, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38c32') }, max: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff39001')", lastmod: Timestamp 1000|190, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff39001') }, max: { _id: ObjectId('4fd97a4905a35677eff391e8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff393cf')", lastmod: Timestamp 1000|192, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff393cf') }, max: { _id: ObjectId('4fd97a4905a35677eff395b6') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3979b')", lastmod: Timestamp 1000|194, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3979b') }, max: { _id: ObjectId('4fd97a4a05a35677eff39985') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39b6a')", lastmod: Timestamp 1000|196, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, max: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39f36')", lastmod: Timestamp 1000|198, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a306')", lastmod: Timestamp 1000|200, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a6d3')", lastmod: Timestamp 1000|202, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3aa9d')", lastmod: Timestamp 1000|204, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, max: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:28 [Balancer] ----
m30999| Thu Jun 14 01:45:28 [Balancer] collection : test.mrShardedOut
m30999| Thu Jun 14 01:45:28 [Balancer] donor : 103 chunks on shard0000
m30999| Thu Jun 14 01:45:28 [Balancer] receiver : 103 chunks on shard0000
m30999| Thu Jun 14 01:45:28 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ a: 1.0 })
m30001| Thu Jun 14 01:45:28 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25598801 142ms
m30999| Thu Jun 14 01:45:28 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|143||000000000000000000000000 min: { a: 480.2747403619077 } max: { a: 483.6281235892167 } dataWritten: 209934 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:28 [conn5] request split points lookup for chunk test.foo { : 480.2747403619077 } -->> { : 483.6281235892167 }
m30999| Thu Jun 14 01:45:28 [conn] chunk not full enough to trigger auto-split { a: 483.3691637491542 }
m30001| Thu Jun 14 01:45:28 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:25925201 214ms
m30000| Thu Jun 14 01:45:28 [conn10] end connection 127.0.0.1:60400 (16 connections now open)
m30999| Thu Jun 14 01:45:28 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30999| Thu Jun 14 01:45:28 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Thu Jun 14 01:45:28 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ a: 1.0 })
m30999| Thu Jun 14 01:45:28 [Balancer] *** End of balancing round
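The balancing round above ends in user assertion 10199: one of the chunk bounds the balancer compares is an empty document ({}) and so lacks the full shard key ({ a: 1.0 }); it then releases the balancer lock, discards its scoped config connection, logs the caught exception, and gives up on the round. A minimal mongo-shell sketch (run through the mongos) is below; it assumes only the config.chunks layout visible in the listing above, tallies chunks per shard for test.mrShardedOut to mirror the donor/receiver summary, and then looks for any test.foo bound missing the shard key field "a" — the condition the assertion complains about. The field names and namespaces are taken from the log; everything else is illustrative.

// Minimal mongo-shell sketch (run through the mongos), assuming only the
// config.chunks layout shown in the listing above.
var cfg = db.getSiblingDB("config");
var perShard = {};
cfg.chunks.find({ ns: "test.mrShardedOut" }).forEach(function (c) {
    perShard[c.shard] = (perShard[c.shard] || 0) + 1;   // count chunks per shard
});
printjson(perShard);   // e.g. { "shard0000" : 103, "shard0001" : 103 }
cfg.chunks.find({ ns: "test.foo" }).forEach(function (c) {
    // flag any chunk whose min/max bound does not carry the shard key "a"
    if (!("a" in c.min) || !("a" in c.max)) {
        print("bound without full shard key: " + c._id);
    }
});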
m30001| Thu Jun 14 01:45:29 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:26240066 312ms
m30001| Thu Jun 14 01:45:29 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:26481267 238ms
m30001| Thu Jun 14 01:45:29 [conn5] request split points lookup for chunk test.foo { : 698.4329238257609 } -->> { : 703.7520953686671 }
m30999| Thu Jun 14 01:45:29 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|107||000000000000000000000000 min: { a: 698.4329238257609 } max: { a: 703.7520953686671 } dataWritten: 210228 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:29 [conn] chunk not full enough to trigger auto-split { a: 701.4743755922797 }
m30001| Thu Jun 14 01:45:29 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:26946338 362ms
m30001| Thu Jun 14 01:45:30 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:27381445 435ms
m30999| Thu Jun 14 01:45:30 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|14||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 417.3437896431063 } dataWritten: 209865 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:30 [conn5] request split points lookup for chunk test.foo { : 411.0287894698923 } -->> { : 417.3437896431063 }
m30001| Thu Jun 14 01:45:30 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 411.0287894698923 } -->> { : 417.3437896431063 }
m30001| Thu Jun 14 01:45:30 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 411.0287894698923 }, max: { a: 417.3437896431063 }, from: "shard0001", splitKeys: [ { a: 413.7945438036655 } ], shardId: "test.foo-a_411.0287894698923", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:30 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:30 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7a32a28802daeee091
m30001| Thu Jun 14 01:45:30 [conn5] splitChunk accepted at version 4|173||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:30 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:30-224", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652730177), what: "split", ns: "test.foo", details: { before: { min: { a: 411.0287894698923 }, max: { a: 417.3437896431063 }, lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 411.0287894698923 }, max: { a: 413.7945438036655 }, lastmod: Timestamp 4000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 413.7945438036655 }, max: { a: 417.3437896431063 }, lastmod: Timestamp 4000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:30 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:30 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 222 version: 4|175||4fd97a3b0d2fef4d6a507be2 based on: 4|173||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:30 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|14||000000000000000000000000 min: { a: 411.0287894698923 } max: { a: 417.3437896431063 } on: { a: 413.7945438036655 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:30 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|175, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:30 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:30 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:30 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
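The exchange just above is the whole autosplit path for test.foo: mongos asks the shard for split points, conn5 takes the per-collection distributed lock, commits the split of chunk 2|14 into 4|174 and 4|175, records a "split" changelog event, releases the lock, and mongos reloads its chunk map (sequenceNumber 222, now at 4|175) and pushes the new version to both shards with setShardVersion. For reference, a sketch of the equivalent split issued by hand is below; the split key is copied from the log and the rest is an assumption for illustration, not part of the test.

// Sketch of the same split issued by hand through the mongos, assuming the
// shard key { a: 1 } seen above; mongos turns this into a splitChunk request
// like the one conn5 logs, with the chosen key becoming the new boundary.
db.adminCommand({ split: "test.foo", middle: { a: 413.7945438036655 } });
// Both halves then show up in the config metadata:
db.getSiblingDB("config").chunks.find({
    ns: "test.foo",
    $or: [ { "min.a": 413.7945438036655 }, { "max.a": 413.7945438036655 } ]
}).forEach(printjson);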
m30001| Thu Jun 14 01:45:30 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:27495393 111ms
m30001| Thu Jun 14 01:45:30 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:27645270 147ms
m30001| Thu Jun 14 01:45:30 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:28054227 406ms
m30001| Thu Jun 14 01:45:31 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:28227527 173ms
m30001| Thu Jun 14 01:45:31 [conn5] request split points lookup for chunk test.foo { : 594.3878051880898 } -->> { : 599.2155367136296 }
m30999| Thu Jun 14 01:45:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|91||000000000000000000000000 min: { a: 594.3878051880898 } max: { a: 599.2155367136296 } dataWritten: 210096 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:31 [conn] chunk not full enough to trigger auto-split { a: 597.4022308754418 }
m30001| Thu Jun 14 01:45:31 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:28540538 310ms
m30001| Thu Jun 14 01:45:31 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:28732580 186ms
m30001| Thu Jun 14 01:45:31 [conn5] request split points lookup for chunk test.foo { : 264.0825842924789 } -->> { : 269.785248844529 }
m30001| Thu Jun 14 01:45:31 [conn5] request split points lookup for chunk test.foo { : 363.6779080113047 } -->> { : 369.0981926515277 }
m30999| Thu Jun 14 01:45:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|40||000000000000000000000000 min: { a: 264.0825842924789 } max: { a: 269.785248844529 } dataWritten: 210704 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:31 [conn] chunk not full enough to trigger auto-split { a: 267.0079718384953 }
m30999| Thu Jun 14 01:45:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|38||000000000000000000000000 min: { a: 363.6779080113047 } max: { a: 369.0981926515277 } dataWritten: 210650 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:31 [conn] chunk not full enough to trigger auto-split { a: 366.4250086564446 }
m30001| Thu Jun 14 01:45:31 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:29123393 388ms
m30001| Thu Jun 14 01:45:31 [conn5] request split points lookup for chunk test.foo { : 998.3975234740553 } -->> { : MaxKey }
m30001| Thu Jun 14 01:45:31 [conn5] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 240.0709323500288 }
m30999| Thu Jun 14 01:45:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { a: 998.3975234740553 } max: { a: MaxKey } dataWritten: 200984 splitThreshold: 943718
m30999| Thu Jun 14 01:45:31 [conn] chunk not full enough to trigger auto-split no split entry
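The "about to initiate autosplit ... dataWritten: N splitThreshold: 1048576" lines show the router-side bookkeeping: chunks here carry a 1 MB split threshold (the top chunk ending at MaxKey gets a reduced one, 943718, roughly 90% of 1 MB), and the dataWritten values printed (just over 200 KB, about one fifth of the threshold) mark the point at which mongos asks the shard for split points. Only when the shard fills the request ("max number of requested split points reached (2)") does a splitChunk follow; otherwise mongos logs "chunk not full enough to trigger auto-split". The sketch below only mirrors that bookkeeping with made-up names; it is not the mongos implementation:

    // Illustrative sketch of the router-side trigger (hypothetical names).
    var splitThreshold = 1024 * 1024;   // 1048576, as printed in the log
    var dataWritten = 0;                // approximate bytes written to one chunk
    function onWrite(bytes, askShardForSplitPoints) {
        dataWritten += bytes;
        // the log's dataWritten values (~210 KB) sit near splitThreshold / 5,
        // which appears to be the point where the lookup is issued
        if (dataWritten > splitThreshold / 5) {
            dataWritten = 0;
            var keys = askShardForSplitPoints();   // "request split points lookup ..."
            if (keys.length < 2) {
                // shard did not reach the requested number of split points
                print("chunk not full enough to trigger auto-split");
                return null;
            }
            return keys;                           // -> splitChunk on the shard
        }
        return null;
    }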
m30999| Thu Jun 14 01:45:31 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|8||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 240.0709323500288 } dataWritten: 209716 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:31 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 233.8565055904641 } -->> { : 240.0709323500288 }
m30001| Thu Jun 14 01:45:31 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 233.8565055904641 }, max: { a: 240.0709323500288 }, from: "shard0001", splitKeys: [ { a: 236.7690508533622 } ], shardId: "test.foo-a_233.8565055904641", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:31 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:31 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a7b32a28802daeee092
m30001| Thu Jun 14 01:45:31 [conn5] splitChunk accepted at version 4|175||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:31 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:31-225", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652731952), what: "split", ns: "test.foo", details: { before: { min: { a: 233.8565055904641 }, max: { a: 240.0709323500288 }, lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 233.8565055904641 }, max: { a: 236.7690508533622 }, lastmod: Timestamp 4000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 236.7690508533622 }, max: { a: 240.0709323500288 }, lastmod: Timestamp 4000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:31 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:31 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 223 version: 4|177||4fd97a3b0d2fef4d6a507be2 based on: 4|175||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:31 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|8||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 240.0709323500288 } on: { a: 236.7690508533622 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:31 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|177, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:31 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:32 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:29510898 377ms
m30999| Thu Jun 14 01:45:32 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|83||000000000000000000000000 min: { a: 725.5771489434317 } max: { a: 729.8361633348899 } dataWritten: 210088 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:32 [conn5] request split points lookup for chunk test.foo { : 725.5771489434317 } -->> { : 729.8361633348899 }
m30999| Thu Jun 14 01:45:32 [conn] chunk not full enough to trigger auto-split { a: 728.379986898595 }
m30999| Thu Jun 14 01:45:32 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:32 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:32 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:29894874 381ms
m30001| Thu Jun 14 01:45:33 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:31165755 1270ms
m30999| Thu Jun 14 01:45:33 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|125||000000000000000000000000 min: { a: 501.5945768521381 } max: { a: 506.5947777056855 } dataWritten: 209953 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:33 [conn5] request split points lookup for chunk test.foo { : 501.5945768521381 } -->> { : 506.5947777056855 }
m30999| Thu Jun 14 01:45:33 [conn] chunk not full enough to trigger auto-split { a: 504.2430265604527 }
m30001| Thu Jun 14 01:45:34 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:31352527 183ms
m30001| Thu Jun 14 01:45:34 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:31519520 166ms
m30001| Thu Jun 14 01:45:34 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:31717541 195ms
m30001| Thu Jun 14 01:45:34 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:31833031 112ms
m30999| Thu Jun 14 01:45:34 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|156||000000000000000000000000 min: { a: 114.9662096443472 } max: { a: 118.3157678917793 } dataWritten: 210480 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:34 [conn5] request split points lookup for chunk test.foo { : 114.9662096443472 } -->> { : 118.3157678917793 }
m30999| Thu Jun 14 01:45:34 [conn] chunk not full enough to trigger auto-split { a: 118.2053820853034 }
m30001| Thu Jun 14 01:45:35 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:32166628 312ms
m30001| Thu Jun 14 01:45:35 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:32527109 333ms
m30001| Thu Jun 14 01:45:35 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:32778364 251ms
m30001| Thu Jun 14 01:45:35 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:33075032 294ms
m30001| Thu Jun 14 01:45:36 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:33191886 114ms
m30001| Thu Jun 14 01:45:36 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:33308800 114ms
m30001| Thu Jun 14 01:45:36 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:34020243 711ms
m30001| Thu Jun 14 01:45:37 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:34248454 225ms
m30001| Thu Jun 14 01:45:37 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:34421774 173ms
m30999| Thu Jun 14 01:45:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|165||000000000000000000000000 min: { a: 191.5307698720086 } max: { a: 194.8927257678023 } dataWritten: 210052 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:37 [conn5] request split points lookup for chunk test.foo { : 191.5307698720086 } -->> { : 194.8927257678023 }
m30999| Thu Jun 14 01:45:37 [conn] chunk not full enough to trigger auto-split { a: 194.4430650694968 }
m30001| Thu Jun 14 01:45:37 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:34742491 318ms
m30999| Thu Jun 14 01:45:37 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|65||000000000000000000000000 min: { a: 708.8986861220777 } max: { a: 714.0536251380356 } dataWritten: 210761 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:37 [conn5] request split points lookup for chunk test.foo { : 708.8986861220777 } -->> { : 714.0536251380356 }
m30999| Thu Jun 14 01:45:37 [conn] chunk not full enough to trigger auto-split { a: 711.7774667509166 }
m30001| Thu Jun 14 01:45:37 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:35114699 366ms
m30001| Thu Jun 14 01:45:38 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:35358686 241ms
m30001| Thu Jun 14 01:45:38 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:35581821 220ms
m30001| Thu Jun 14 01:45:38 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:35930068 345ms
m30001| Thu Jun 14 01:45:39 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:36128506 195ms
m30001| Thu Jun 14 01:45:39 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:36465419 334ms
m30001| Thu Jun 14 01:45:39 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:36675519 207ms
m30999| Thu Jun 14 01:45:39 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|36||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 510.639225969218 } dataWritten: 210674 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:39 [conn5] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 510.639225969218 }
m30999| Thu Jun 14 01:45:39 [conn] chunk not full enough to trigger auto-split { a: 509.3203103528617 }
m30001| Thu Jun 14 01:45:39 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:36859023 180ms
m30001| Thu Jun 14 01:45:40 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:37330424 471ms
m30001| Thu Jun 14 01:45:40 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:37541854 208ms
m30001| Thu Jun 14 01:45:40 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:37796184 251ms
m30001| Thu Jun 14 01:45:41 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:38268300 471ms
m30001| Thu Jun 14 01:45:41 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:38749559 479ms
m30999| Thu Jun 14 01:45:41 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|131||000000000000000000000000 min: { a: 575.2102660145707 } max: { a: 580.4600029065366 } dataWritten: 209897 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:41 [conn5] request split points lookup for chunk test.foo { : 575.2102660145707 } -->> { : 580.4600029065366 }
m30999| Thu Jun 14 01:45:41 [conn] chunk not full enough to trigger auto-split { a: 578.3289020088243 }
m30001| Thu Jun 14 01:45:42 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:39268635 475ms
m30001| Thu Jun 14 01:45:42 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:39394333 125ms
m30001| Thu Jun 14 01:45:42 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:39717483 320ms
m30001| Thu Jun 14 01:45:42 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:39867777 147ms
m30001| Thu Jun 14 01:45:43 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:40129830 261ms
m30999| Thu Jun 14 01:45:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|61||000000000000000000000000 min: { a: 821.178966084225 } max: { a: 827.5642418995561 } dataWritten: 210648 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:43 [conn5] request split points lookup for chunk test.foo { : 821.178966084225 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:45:43 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 821.178966084225 } -->> { : 827.5642418995561 }
m30001| Thu Jun 14 01:45:43 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 821.178966084225 }, max: { a: 827.5642418995561 }, from: "shard0001", splitKeys: [ { a: 824.2680954051706 } ], shardId: "test.foo-a_821.178966084225", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:43 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:43 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a8732a28802daeee093
m30001| Thu Jun 14 01:45:43 [conn5] splitChunk accepted at version 4|177||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:43 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:43-226", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652743050), what: "split", ns: "test.foo", details: { before: { min: { a: 821.178966084225 }, max: { a: 827.5642418995561 }, lastmod: Timestamp 2000|61, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 821.178966084225 }, max: { a: 824.2680954051706 }, lastmod: Timestamp 4000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 824.2680954051706 }, max: { a: 827.5642418995561 }, lastmod: Timestamp 4000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:43 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:43 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 224 version: 4|179||4fd97a3b0d2fef4d6a507be2 based on: 4|177||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:43 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|61||000000000000000000000000 min: { a: 821.178966084225 } max: { a: 827.5642418995561 } on: { a: 824.2680954051706 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:43 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|179, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:43 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:43 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:43 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
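Each splitChunk above is bracketed by the collection-wide distributed lock: the shard creates a lock handle for test.foo against the config server (localhost:30000), acquires it (the ts ObjectId identifies that acquisition), commits the split plus the changelog entry, and unlocks. The lock state is ordinary data in the config database; a small sketch to look at it, again assuming the mongos from this run on localhost:30999:

    // Hedged sketch: the distributed-lock documents referenced in the lines above.
    // Connecting to the config server on localhost:30000 directly shows the same data.
    var conf = connect("localhost:30999/config");
    conf.locks.find({ _id: "test.foo" }).forEach(printjson);               // lock doc: state, ts, who
    conf.lockpings.find().sort({ ping: -1 }).limit(5).forEach(printjson);  // lock-ping heartbeats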
m30001| Thu Jun 14 01:45:43 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:40465999 333ms
m30999| Thu Jun 14 01:45:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|131||000000000000000000000000 min: { a: 575.2102660145707 } max: { a: 580.4600029065366 } dataWritten: 209759 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:43 [conn5] request split points lookup for chunk test.foo { : 575.2102660145707 } -->> { : 580.4600029065366 }
m30999| Thu Jun 14 01:45:43 [conn] chunk not full enough to trigger auto-split { a: 578.3289020088243 }
m30001| Thu Jun 14 01:45:43 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:40821933 343ms
m30999| Thu Jun 14 01:45:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|53||000000000000000000000000 min: { a: 521.3538677091974 } max: { a: 526.919018850918 } dataWritten: 210695 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:43 [conn5] request split points lookup for chunk test.foo { : 521.3538677091974 } -->> { : 526.919018850918 }
m30999| Thu Jun 14 01:45:43 [conn] chunk not full enough to trigger auto-split { a: 524.1294276544279 }
m30999| Thu Jun 14 01:45:43 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|26||000000000000000000000000 min: { a: 369.0981926515277 } max: { a: 373.3849373054079 } dataWritten: 210093 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:43 [conn5] request split points lookup for chunk test.foo { : 369.0981926515277 } -->> { : 373.3849373054079 }
m30999| Thu Jun 14 01:45:43 [conn] chunk not full enough to trigger auto-split { a: 371.9029796352952 }
m30001| Thu Jun 14 01:45:43 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:41062000 237ms
m30001| Thu Jun 14 01:45:44 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:41172758 108ms
m30999| Thu Jun 14 01:45:44 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|47||000000000000000000000000 min: { a: 131.8115136015859 } max: { a: 136.5735165062921 } dataWritten: 210120 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:44 [conn5] request split points lookup for chunk test.foo { : 131.8115136015859 } -->> { : 136.5735165062921 }
m30999| Thu Jun 14 01:45:44 [conn] chunk not full enough to trigger auto-split { a: 134.7154415419283 }
m30999| Thu Jun 14 01:45:44 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|91||000000000000000000000000 min: { a: 594.3878051880898 } max: { a: 599.2155367136296 } dataWritten: 210043 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:44 [conn5] request split points lookup for chunk test.foo { : 594.3878051880898 } -->> { : 599.2155367136296 }
m30999| Thu Jun 14 01:45:44 [conn] chunk not full enough to trigger auto-split { a: 597.3766961658547 }
m30001| Thu Jun 14 01:45:44 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:41428335 252ms
m30001| Thu Jun 14 01:45:44 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:42003074 574ms
m30999| Thu Jun 14 01:45:44 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|69||000000000000000000000000 min: { a: 584.4225320226172 } max: { a: 590.8997745355827 } dataWritten: 210114 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:44 [conn5] request split points lookup for chunk test.foo { : 584.4225320226172 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:45:44 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 584.4225320226172 } -->> { : 590.8997745355827 }
m30001| Thu Jun 14 01:45:44 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 584.4225320226172 }, max: { a: 590.8997745355827 }, from: "shard0001", splitKeys: [ { a: 587.1685851091131 } ], shardId: "test.foo-a_584.4225320226172", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:44 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:44 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a8832a28802daeee094
m30001| Thu Jun 14 01:45:44 [conn5] splitChunk accepted at version 4|179||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:44 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:44-227", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652744951), what: "split", ns: "test.foo", details: { before: { min: { a: 584.4225320226172 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 4000|69, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 584.4225320226172 }, max: { a: 587.1685851091131 }, lastmod: Timestamp 4000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 587.1685851091131 }, max: { a: 590.8997745355827 }, lastmod: Timestamp 4000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:44 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:44 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 225 version: 4|181||4fd97a3b0d2fef4d6a507be2 based on: 4|179||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:44 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|69||000000000000000000000000 min: { a: 584.4225320226172 } max: { a: 590.8997745355827 } on: { a: 587.1685851091131 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:44 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|181, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:44 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:44 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|27||000000000000000000000000 min: { a: 558.0115575910545 } max: { a: 563.897889911273 } dataWritten: 210777 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:44 [conn5] request split points lookup for chunk test.foo { : 558.0115575910545 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:45:44 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 558.0115575910545 } -->> { : 563.897889911273 }
m30001| Thu Jun 14 01:45:44 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 558.0115575910545 }, max: { a: 563.897889911273 }, from: "shard0001", splitKeys: [ { a: 560.838593433049 } ], shardId: "test.foo-a_558.0115575910545", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:44 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:44 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a8832a28802daeee095
m30001| Thu Jun 14 01:45:44 [conn5] splitChunk accepted at version 4|181||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:44 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:44-228", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652744962), what: "split", ns: "test.foo", details: { before: { min: { a: 558.0115575910545 }, max: { a: 563.897889911273 }, lastmod: Timestamp 2000|27, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 558.0115575910545 }, max: { a: 560.838593433049 }, lastmod: Timestamp 4000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 560.838593433049 }, max: { a: 563.897889911273 }, lastmod: Timestamp 4000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:44 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:44 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 226 version: 4|183||4fd97a3b0d2fef4d6a507be2 based on: 4|181||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:44 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|27||000000000000000000000000 min: { a: 558.0115575910545 } max: { a: 563.897889911273 } on: { a: 560.838593433049 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:44 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|183, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:44 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:44 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:44 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:45 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:42285200 279ms
m30001| Thu Jun 14 01:45:45 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:42655756 292ms
m30999| Thu Jun 14 01:45:45 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|64||000000000000000000000000 min: { a: 703.7520953686671 } max: { a: 708.8986861220777 } dataWritten: 210536 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:45 [conn5] request split points lookup for chunk test.foo { : 703.7520953686671 } -->> { : 708.8986861220777 }
m30999| Thu Jun 14 01:45:45 [conn] chunk not full enough to trigger auto-split { a: 706.6675825931287 }
m30001| Thu Jun 14 01:45:46 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:43131518 372ms
m30001| Thu Jun 14 01:45:46 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:43334197 200ms
m30001| Thu Jun 14 01:45:46 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:43508332 171ms
m30001| Thu Jun 14 01:45:46 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:43909320 398ms
m30001| Thu Jun 14 01:45:47 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:44043986 132ms
m30001| Thu Jun 14 01:45:47 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:44166877 122ms
m30001| Thu Jun 14 01:45:47 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:44424402 207ms
m30001| Thu Jun 14 01:45:47 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:44637448 210ms
m30999| Thu Jun 14 01:45:47 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|162||000000000000000000000000 min: { a: 848.2332478721062 } max: { a: 851.468355264985 } dataWritten: 209988 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:47 [conn5] request split points lookup for chunk test.foo { : 848.2332478721062 } -->> { : 851.468355264985 }
m30999| Thu Jun 14 01:45:47 [conn] chunk not full enough to trigger auto-split { a: 851.3181002529054 }
m30001| Thu Jun 14 01:45:47 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:44741136 101ms
m30001| Thu Jun 14 01:45:48 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:45083447 328ms
m30001| Thu Jun 14 01:45:48 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:45413540 327ms
m30001| Thu Jun 14 01:45:48 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:45669791 253ms
m30001| Thu Jun 14 01:45:48 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:45897158 224ms
m30001| Thu Jun 14 01:45:49 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:46072283 172ms
m30999| Thu Jun 14 01:45:49 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|60||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 821.178966084225 } dataWritten: 210023 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:49 [conn5] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 821.178966084225 }
m30999| Thu Jun 14 01:45:49 [conn] chunk not full enough to trigger auto-split { a: 818.7207194054589 }
m30001| Thu Jun 14 01:45:49 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:46467323 392ms
m30999| Thu Jun 14 01:45:49 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|155||000000000000000000000000 min: { a: 184.9464054233513 } max: { a: 188.6698238706465 } dataWritten: 210753 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:49 [conn5] request split points lookup for chunk test.foo { : 184.9464054233513 } -->> { : 188.6698238706465 }
m30999| Thu Jun 14 01:45:49 [conn] chunk not full enough to trigger auto-split { a: 187.9933566027733 }
m30001| Thu Jun 14 01:45:49 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:46620473 150ms
m30999| Thu Jun 14 01:45:49 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|130||000000000000000000000000 min: { a: 571.914212129846 } max: { a: 575.2102660145707 } dataWritten: 209981 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:49 [conn5] request split points lookup for chunk test.foo { : 571.914212129846 } -->> { : 575.2102660145707 }
m30999| Thu Jun 14 01:45:49 [conn] chunk not full enough to trigger auto-split { a: 574.894010499331 }
m30001| Thu Jun 14 01:45:49 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:46775570 152ms
m30001| Thu Jun 14 01:45:50 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:46981702 201ms
m30001| Thu Jun 14 01:45:50 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:47139517 155ms
m30001| Thu Jun 14 01:45:50 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:47357694 215ms
m30001| Thu Jun 14 01:45:50 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:47548032 188ms
m30999| Thu Jun 14 01:45:50 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } dataWritten: 210143 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:50 [conn5] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30001| Thu Jun 14 01:45:50 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 204.0577089538382 } -->> { : 209.8684815227433 }
m30001| Thu Jun 14 01:45:50 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 204.0577089538382 }, max: { a: 209.8684815227433 }, from: "shard0001", splitKeys: [ { a: 207.0875453859469 } ], shardId: "test.foo-a_204.0577089538382", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:50 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:50 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a8e32a28802daeee096
m30001| Thu Jun 14 01:45:50 [conn5] splitChunk accepted at version 4|183||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:50 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:50-229", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652750579), what: "split", ns: "test.foo", details: { before: { min: { a: 204.0577089538382 }, max: { a: 209.8684815227433 }, lastmod: Timestamp 2000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 204.0577089538382 }, max: { a: 207.0875453859469 }, lastmod: Timestamp 4000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 207.0875453859469 }, max: { a: 209.8684815227433 }, lastmod: Timestamp 4000|185, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:50 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:50 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 227 version: 4|185||4fd97a3b0d2fef4d6a507be2 based on: 4|183||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:50 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|28||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 209.8684815227433 } on: { a: 207.0875453859469 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:50 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|185, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:50 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:50 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:50 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:50 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:47684524 133ms
m30999| Thu Jun 14 01:45:50 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|60||000000000000000000000000 min: { a: 815.7684070742035 } max: { a: 821.178966084225 } dataWritten: 210131 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:50 [conn5] request split points lookup for chunk test.foo { : 815.7684070742035 } -->> { : 821.178966084225 }
m30999| Thu Jun 14 01:45:50 [conn] chunk not full enough to trigger auto-split { a: 818.7101407156015 }
m30001| Thu Jun 14 01:45:50 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:47939045 252ms
m30001| Thu Jun 14 01:45:51 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:48068844 125ms
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|181||000000000000000000000000 min: { a: 587.1685851091131 } max: { a: 590.8997745355827 } dataWritten: 209881 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 587.1685851091131 } -->> { : 590.8997745355827 }
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 590.1702012144566 }
m30001| Thu Jun 14 01:45:51 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:48381183 225ms
m30001| Thu Jun 14 01:45:51 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.6, size: 511MB, took 25.924 secs
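The FileAllocator line above is the background preallocation of the next data file for the test database on shard m30001 (test.6, 511MB, about 26 seconds on this 32-bit builder). That kind of background I/O is consistent with the multi-hundred-millisecond insert timings conn3 keeps reporting, though the log alone does not prove the link. A quick way to compare allocated file space against live data, assuming the same mongos on localhost:30999:

    // Hedged sketch: aggregate storage stats for the "test" database via mongos.
    var testdb = connect("localhost:30999/test");
    printjson(testdb.stats());   // dataSize vs. storageSize/fileSize shows the preallocated headroom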
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|24||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 315.9151551096841 } dataWritten: 210277 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 309.3101713472285 } -->> { : 315.9151551096841 }
m30001| Thu Jun 14 01:45:51 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 309.3101713472285 } -->> { : 315.9151551096841 }
m30001| Thu Jun 14 01:45:51 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 309.3101713472285 }, max: { a: 315.9151551096841 }, from: "shard0001", splitKeys: [ { a: 312.3135459595852 } ], shardId: "test.foo-a_309.3101713472285", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:51 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:51 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a8f32a28802daeee097
m30001| Thu Jun 14 01:45:51 [conn5] splitChunk accepted at version 4|185||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:51 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:51-230", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652751518), what: "split", ns: "test.foo", details: { before: { min: { a: 309.3101713472285 }, max: { a: 315.9151551096841 }, lastmod: Timestamp 2000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 309.3101713472285 }, max: { a: 312.3135459595852 }, lastmod: Timestamp 4000|186, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 312.3135459595852 }, max: { a: 315.9151551096841 }, lastmod: Timestamp 4000|187, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:51 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:51 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 228 version: 4|187||4fd97a3b0d2fef4d6a507be2 based on: 4|185||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:51 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|24||000000000000000000000000 min: { a: 309.3101713472285 } max: { a: 315.9151551096841 } on: { a: 312.3135459595852 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|187, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 181.7281932506388 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|32||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 181.7281932506388 } dataWritten: 209908 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 178.6825691127192 }
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 951.1531632632295 } -->> { : 955.9182567868356 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|153||000000000000000000000000 min: { a: 951.1531632632295 } max: { a: 955.9182567868356 } dataWritten: 209894 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 953.9801682552009 }
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 510.639225969218 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|36||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 510.639225969218 } dataWritten: 210714 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 509.2800767312591 }
m30001| Thu Jun 14 01:45:51 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:48700349 197ms
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 536.0462960134931 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:45:51 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 536.0462960134931 } -->> { : 542.4296058071777 }
m30001| Thu Jun 14 01:45:51 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 536.0462960134931 }, max: { a: 542.4296058071777 }, from: "shard0001", splitKeys: [ { a: 539.1281234038355 } ], shardId: "test.foo-a_536.0462960134931", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:51 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:51 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a8f32a28802daeee098
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|23||000000000000000000000000 min: { a: 536.0462960134931 } max: { a: 542.4296058071777 } dataWritten: 210386 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:51 [conn5] splitChunk accepted at version 4|187||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:51 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:51-231", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652751851), what: "split", ns: "test.foo", details: { before: { min: { a: 536.0462960134931 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 4000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 536.0462960134931 }, max: { a: 539.1281234038355 }, lastmod: Timestamp 4000|188, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 539.1281234038355 }, max: { a: 542.4296058071777 }, lastmod: Timestamp 4000|189, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:51 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:51 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 229 version: 4|189||4fd97a3b0d2fef4d6a507be2 based on: 4|187||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:51 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|23||000000000000000000000000 min: { a: 536.0462960134931 } max: { a: 542.4296058071777 } on: { a: 539.1281234038355 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|189, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 236.7690508533622 } -->> { : 240.0709323500288 }
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|177||000000000000000000000000 min: { a: 236.7690508533622 } max: { a: 240.0709323500288 } dataWritten: 210315 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 239.5950509251964 }
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 668.6362621623331 } -->> { : 672.2870891659105 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|88||000000000000000000000000 min: { a: 668.6362621623331 } max: { a: 672.2870891659105 } dataWritten: 210482 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 671.5523583534341 }
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:51 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 868.5788679342879 } -->> { : 873.8718881199745 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|58||000000000000000000000000 min: { a: 868.5788679342879 } max: { a: 873.8718881199745 } dataWritten: 209962 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 871.4574403507534 }
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 344.8762285660836 } -->> { : 349.1094580993942 }
m30001| Thu Jun 14 01:45:51 [conn5] request split points lookup for chunk test.foo { : 87.41840730135154 } -->> { : 92.91917824556573 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|48||000000000000000000000000 min: { a: 344.8762285660836 } max: { a: 349.1094580993942 } dataWritten: 210363 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 347.8317801202912 }
m30999| Thu Jun 14 01:45:51 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|121||000000000000000000000000 min: { a: 87.41840730135154 } max: { a: 92.91917824556573 } dataWritten: 210538 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:51 [conn] chunk not full enough to trigger auto-split { a: 89.91670593889378 }
m30001| Thu Jun 14 01:45:52 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:48893878 117ms
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 277.1560315461681 } -->> { : 280.6827052136106 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 141.1884883168546 } -->> { : 146.6503611644078 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|132||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 280.6827052136106 } dataWritten: 210545 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 280.1907031389082 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|9||000000000000000000000000 min: { a: 141.1884883168546 } max: { a: 146.6503611644078 } dataWritten: 210634 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 144.046978022784 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 430.2130944220548 } -->> { : 433.3806610330477 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|161||000000000000000000000000 min: { a: 430.2130944220548 } max: { a: 433.3806610330477 } dataWritten: 210227 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 433.074955841252 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 891.8750702869381 } -->> { : 898.6566515076229 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 891.8750702869381 } -->> { : 898.6566515076229 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 891.8750702869381 }, max: { a: 898.6566515076229 }, from: "shard0001", splitKeys: [ { a: 894.8106130543974 } ], shardId: "test.foo-a_891.8750702869381", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee099
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 898.6566515076229 } dataWritten: 210554 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|189||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-232", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752135), what: "split", ns: "test.foo", details: { before: { min: { a: 891.8750702869381 }, max: { a: 898.6566515076229 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 891.8750702869381 }, max: { a: 894.8106130543974 }, lastmod: Timestamp 4000|190, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 894.8106130543974 }, max: { a: 898.6566515076229 }, lastmod: Timestamp 4000|191, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 230 version: 4|191||4fd97a3b0d2fef4d6a507be2 based on: 4|189||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|4||000000000000000000000000 min: { a: 891.8750702869381 } max: { a: 898.6566515076229 } on: { a: 894.8106130543974 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|191, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 738.6198156338151 } -->> { : 744.9210849408088 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 738.6198156338151 } -->> { : 744.9210849408088 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.6198156338151 }, max: { a: 744.9210849408088 }, from: "shard0001", splitKeys: [ { a: 741.3245176669844 } ], shardId: "test.foo-a_738.6198156338151", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee09a
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|12||000000000000000000000000 min: { a: 738.6198156338151 } max: { a: 744.9210849408088 } dataWritten: 210435 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|191||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-233", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752154), what: "split", ns: "test.foo", details: { before: { min: { a: 738.6198156338151 }, max: { a: 744.9210849408088 }, lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 738.6198156338151 }, max: { a: 741.3245176669844 }, lastmod: Timestamp 4000|192, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 741.3245176669844 }, max: { a: 744.9210849408088 }, lastmod: Timestamp 4000|193, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 231 version: 4|193||4fd97a3b0d2fef4d6a507be2 based on: 4|191||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|12||000000000000000000000000 min: { a: 738.6198156338151 } max: { a: 744.9210849408088 } on: { a: 741.3245176669844 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|193, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 970.39026226179 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 970.39026226179 } -->> { : 977.1164746659301 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 970.39026226179 }, max: { a: 977.1164746659301 }, from: "shard0001", splitKeys: [ { a: 973.4895868865218 } ], shardId: "test.foo-a_970.39026226179", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee09b
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|45||000000000000000000000000 min: { a: 970.39026226179 } max: { a: 977.1164746659301 } dataWritten: 210683 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|193||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-234", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752318), what: "split", ns: "test.foo", details: { before: { min: { a: 970.39026226179 }, max: { a: 977.1164746659301 }, lastmod: Timestamp 2000|45, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 970.39026226179 }, max: { a: 973.4895868865218 }, lastmod: Timestamp 4000|194, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 973.4895868865218 }, max: { a: 977.1164746659301 }, lastmod: Timestamp 4000|195, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 232 version: 4|195||4fd97a3b0d2fef4d6a507be2 based on: 4|193||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|45||000000000000000000000000 min: { a: 970.39026226179 } max: { a: 977.1164746659301 } on: { a: 973.4895868865218 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|195, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 87.41840730135154 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 87.41840730135154 } -->> { : 92.91917824556573 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 87.41840730135154 }, max: { a: 92.91917824556573 }, from: "shard0001", splitKeys: [ { a: 89.89791872458619 } ], shardId: "test.foo-a_87.41840730135154", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee09c
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|121||000000000000000000000000 min: { a: 87.41840730135154 } max: { a: 92.91917824556573 } dataWritten: 210529 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|195||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-235", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752334), what: "split", ns: "test.foo", details: { before: { min: { a: 87.41840730135154 }, max: { a: 92.91917824556573 }, lastmod: Timestamp 4000|121, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 87.41840730135154 }, max: { a: 89.89791872458619 }, lastmod: Timestamp 4000|196, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 89.89791872458619 }, max: { a: 92.91917824556573 }, lastmod: Timestamp 4000|197, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 233 version: 4|197||4fd97a3b0d2fef4d6a507be2 based on: 4|195||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|121||000000000000000000000000 min: { a: 87.41840730135154 } max: { a: 92.91917824556573 } on: { a: 89.89791872458619 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|197, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 204.0577089538382 } -->> { : 207.0875453859469 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|184||000000000000000000000000 min: { a: 204.0577089538382 } max: { a: 207.0875453859469 } dataWritten: 210288 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 207.0369898408363 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|68||000000000000000000000000 min: { a: 580.4600029065366 } max: { a: 584.4225320226172 } dataWritten: 210714 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 583.2279943859203 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 580.4600029065366 } -->> { : 584.4225320226172 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 628.1995001147562 } -->> { : 632.4786347534061 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|14||000000000000000000000000 min: { a: 628.1995001147562 } max: { a: 632.4786347534061 } dataWritten: 209813 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 630.9348775570131 }
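The exchanges just above show the other branch: mongos requests split points, the shard's scan does not find enough data, and mongos logs "chunk not full enough to trigger auto-split". Below is a hypothetical sketch of that shard-side sampling, with made-up sizes and names; the key printed in the "not full enough" lines appears to be the single candidate the shard did return, which this sketch collapses to an empty result.

def pick_split_points(keys_in_order, doc_bytes, max_chunk_bytes, max_points=2):
    """Walk the chunk's keys in index order and emit a candidate split key
    each time roughly half of max_chunk_bytes has been accumulated.

    Returns at most max_points interior keys; an empty list stands for the
    "not full enough" outcome, while hitting max_points mirrors the
    "max number of requested split points reached (2)" lines.
    """
    budget = max_chunk_bytes // 2
    accumulated = 0
    points = []
    for key in keys_in_order:
        accumulated += doc_bytes
        if accumulated >= budget:
            points.append(key)
            accumulated = 0
            if len(points) >= max_points:
                break
    return points


# Made-up sizes: 12000 ~100-byte docs against a 1 MiB chunk yield two
# candidates (split proceeds); 3000 docs yield none (no split).
print(pick_split_points(({"a": i} for i in range(12000)), 100, 1048576))
print(pick_split_points(({"a": i} for i in range(3000)), 100, 1048576))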
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 721.9923962351373 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 714.0536251380356 } -->> { : 721.9923962351373 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, from: "shard0001", splitKeys: [ { a: 717.0859810000978 } ], shardId: "test.foo-a_714.0536251380356", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee09d
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|167||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 721.9923962351373 } dataWritten: 209924 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|197||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-236", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752458), what: "split", ns: "test.foo", details: { before: { min: { a: 714.0536251380356 }, max: { a: 721.9923962351373 }, lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 714.0536251380356 }, max: { a: 717.0859810000978 }, lastmod: Timestamp 4000|198, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 717.0859810000978 }, max: { a: 721.9923962351373 }, lastmod: Timestamp 4000|199, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 234 version: 4|199||4fd97a3b0d2fef4d6a507be2 based on: 4|197||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|167||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 721.9923962351373 } on: { a: 717.0859810000978 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|199, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 407.0796926580036 } -->> { : 411.0287894698923 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|167||000000000000000000000000 min: { a: 407.0796926580036 } max: { a: 411.0287894698923 } dataWritten: 209887 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 410.2455810338489 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 191.5307698720086 } -->> { : 194.8927257678023 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|165||000000000000000000000000 min: { a: 191.5307698720086 } max: { a: 194.8927257678023 } dataWritten: 210223 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 194.3443482868258 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 790.298943411581 } -->> { : 793.7120312511385 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|114||000000000000000000000000 min: { a: 790.298943411581 } max: { a: 793.7120312511385 } dataWritten: 209782 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 793.0697657256077 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 914.1361338478089 } -->> { : 918.4259760765641 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|147||000000000000000000000000 min: { a: 914.1361338478089 } max: { a: 918.4259760765641 } dataWritten: 210339 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 916.8950008953375 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 843.8858257205128 } -->> { : 848.2332478721062 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|123||000000000000000000000000 min: { a: 843.8858257205128 } max: { a: 848.2332478721062 } dataWritten: 210659 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 846.7927667001376 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 784.2714953599016 } -->> { : 790.298943411581 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 784.2714953599016 } -->> { : 790.298943411581 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 784.2714953599016 }, max: { a: 790.298943411581 }, from: "shard0001", splitKeys: [ { a: 787.2181223195419 } ], shardId: "test.foo-a_784.2714953599016", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee09e
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|199||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-237", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752686), what: "split", ns: "test.foo", details: { before: { min: { a: 784.2714953599016 }, max: { a: 790.298943411581 }, lastmod: Timestamp 2000|42, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 784.2714953599016 }, max: { a: 787.2181223195419 }, lastmod: Timestamp 4000|200, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 787.2181223195419 }, max: { a: 790.298943411581 }, lastmod: Timestamp 4000|201, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|42||000000000000000000000000 min: { a: 784.2714953599016 } max: { a: 790.298943411581 } dataWritten: 210698 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 235 version: 4|201||4fd97a3b0d2fef4d6a507be2 based on: 4|199||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|42||000000000000000000000000 min: { a: 784.2714953599016 } max: { a: 790.298943411581 } on: { a: 787.2181223195419 } (splitThreshold 1048576)
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 369.0981926515277 } -->> { : 373.3849373054079 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 864.7746195980726 } -->> { : 868.5788679342879 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|201, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|26||000000000000000000000000 min: { a: 369.0981926515277 } max: { a: 373.3849373054079 } dataWritten: 210608 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 371.8414585785561 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|173||000000000000000000000000 min: { a: 864.7746195980726 } max: { a: 868.5788679342879 } dataWritten: 210096 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 867.5263071677014 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 437.040103636678 } -->> { : 441.0435238853461 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|75||000000000000000000000000 min: { a: 437.040103636678 } max: { a: 441.0435238853461 } dataWritten: 210745 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 439.9081070444878 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 732.9348251743502 } -->> { : 738.6198156338151 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 732.9348251743502 } -->> { : 738.6198156338151 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 732.9348251743502 }, max: { a: 738.6198156338151 }, from: "shard0001", splitKeys: [ { a: 735.4457009121708 } ], shardId: "test.foo-a_732.9348251743502", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee09f
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|201||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-238", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752738), what: "split", ns: "test.foo", details: { before: { min: { a: 732.9348251743502 }, max: { a: 738.6198156338151 }, lastmod: Timestamp 4000|145, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 732.9348251743502 }, max: { a: 735.4457009121708 }, lastmod: Timestamp 4000|202, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 735.4457009121708 }, max: { a: 738.6198156338151 }, lastmod: Timestamp 4000|203, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 943.2489828660326 } -->> { : 948.0165404542549 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|145||000000000000000000000000 min: { a: 732.9348251743502 } max: { a: 738.6198156338151 } dataWritten: 210044 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 236 version: 4|203||4fd97a3b0d2fef4d6a507be2 based on: 4|201||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|145||000000000000000000000000 min: { a: 732.9348251743502 } max: { a: 738.6198156338151 } on: { a: 735.4457009121708 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|203, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|71||000000000000000000000000 min: { a: 943.2489828660326 } max: { a: 948.0165404542549 } dataWritten: 210779 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 946.0100381573749 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 864.7746195980726 } -->> { : 868.5788679342879 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|173||000000000000000000000000 min: { a: 864.7746195980726 } max: { a: 868.5788679342879 } dataWritten: 210610 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 867.5198437823263 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 209.8684815227433 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 209.8684815227433 } -->> { : 216.8904302452864 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 209.8684815227433 }, max: { a: 216.8904302452864 }, from: "shard0001", splitKeys: [ { a: 212.8104857756458 } ], shardId: "test.foo-a_209.8684815227433", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee0a0
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|203||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-239", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752778), what: "split", ns: "test.foo", details: { before: { min: { a: 209.8684815227433 }, max: { a: 216.8904302452864 }, lastmod: Timestamp 2000|29, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 209.8684815227433 }, max: { a: 212.8104857756458 }, lastmod: Timestamp 4000|204, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 212.8104857756458 }, max: { a: 216.8904302452864 }, lastmod: Timestamp 4000|205, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|29||000000000000000000000000 min: { a: 209.8684815227433 } max: { a: 216.8904302452864 } dataWritten: 210383 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 237 version: 4|205||4fd97a3b0d2fef4d6a507be2 based on: 4|203||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|29||000000000000000000000000 min: { a: 209.8684815227433 } max: { a: 216.8904302452864 } on: { a: 212.8104857756458 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|205, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 752.6019558395919 } -->> { : 756.637103632288 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|54||000000000000000000000000 min: { a: 752.6019558395919 } max: { a: 756.637103632288 } dataWritten: 210075 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 755.4594161126889 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 640.7093733209429 } -->> { : 644.4017960752651 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|84||000000000000000000000000 min: { a: 640.7093733209429 } max: { a: 644.4017960752651 } dataWritten: 210517 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 643.4465005189136 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 594.3878051880898 } -->> { : 599.2155367136296 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|91||000000000000000000000000 min: { a: 594.3878051880898 } max: { a: 599.2155367136296 } dataWritten: 210148 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 597.2217826169958 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 536.0462960134931 } -->> { : 539.1281234038355 }
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|188||000000000000000000000000 min: { a: 536.0462960134931 } max: { a: 539.1281234038355 } dataWritten: 210519 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] chunk not full enough to trigger auto-split { a: 539.0582808698662 }
m30001| Thu Jun 14 01:45:52 [conn5] request split points lookup for chunk test.foo { : 672.2870891659105 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:45:52 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 672.2870891659105 } -->> { : 678.3563510786536 }
m30001| Thu Jun 14 01:45:52 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 672.2870891659105 }, max: { a: 678.3563510786536 }, from: "shard0001", splitKeys: [ { a: 675.1811603867598 } ], shardId: "test.foo-a_672.2870891659105", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:52 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9032a28802daeee0a1
m30001| Thu Jun 14 01:45:52 [conn5] splitChunk accepted at version 4|205||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:52 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:52-240", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652752980), what: "split", ns: "test.foo", details: { before: { min: { a: 672.2870891659105 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 4000|89, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 672.2870891659105 }, max: { a: 675.1811603867598 }, lastmod: Timestamp 4000|206, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 675.1811603867598 }, max: { a: 678.3563510786536 }, lastmod: Timestamp 4000|207, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:52 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:52 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|89||000000000000000000000000 min: { a: 672.2870891659105 } max: { a: 678.3563510786536 } dataWritten: 210280 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:52 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 238 version: 4|207||4fd97a3b0d2fef4d6a507be2 based on: 4|205||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:52 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|89||000000000000000000000000 min: { a: 672.2870891659105 } max: { a: 678.3563510786536 } on: { a: 675.1811603867598 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|207, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:52 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 594.3878051880898 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|90||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 594.3878051880898 } dataWritten: 209892 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 593.497460874808 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 741.3245176669844 } -->> { : 744.9210849408088 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|193||000000000000000000000000 min: { a: 741.3245176669844 } max: { a: 744.9210849408088 } dataWritten: 209977 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 744.1751437147549 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 47.94081917961535 } -->> { : 51.90923851177054 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|102||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 51.90923851177054 } dataWritten: 210372 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 50.79944260626134 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 114.9662096443472 } -->> { : 118.3157678917793 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 277.1560315461681 } -->> { : 280.6827052136106 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|156||000000000000000000000000 min: { a: 114.9662096443472 } max: { a: 118.3157678917793 } dataWritten: 209861 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 118.0197314945662 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|132||000000000000000000000000 min: { a: 277.1560315461681 } max: { a: 280.6827052136106 } dataWritten: 210056 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 279.9725659724359 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } dataWritten: 209803 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 360.944539039734 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 493.6797279933101 } -->> { : 498.2021416153332 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|111||000000000000000000000000 min: { a: 493.6797279933101 } max: { a: 498.2021416153332 } dataWritten: 210295 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 496.3490541247912 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 717.0859810000978 } -->> { : 721.9923962351373 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|199||000000000000000000000000 min: { a: 717.0859810000978 } max: { a: 721.9923962351373 } dataWritten: 209873 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 719.7882133335951 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 392.8718206829087 } -->> { : 400.6101810646703 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 392.8718206829087 } -->> { : 400.6101810646703 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, from: "shard0001", splitKeys: [ { a: 395.6502767966605 } ], shardId: "test.foo-a_392.8718206829087", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a2
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|207||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-241", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753235), what: "split", ns: "test.foo", details: { before: { min: { a: 392.8718206829087 }, max: { a: 400.6101810646703 }, lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 392.8718206829087 }, max: { a: 395.6502767966605 }, lastmod: Timestamp 4000|208, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 395.6502767966605 }, max: { a: 400.6101810646703 }, lastmod: Timestamp 4000|209, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 61.76919454003927 } -->> { : 66.37486853611429 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|169||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 400.6101810646703 } dataWritten: 209753 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 240.0709323500288 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 240.0709323500288 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 240.0709323500288 }, max: { a: 248.3080159156712 }, from: "shard0001", splitKeys: [ { a: 242.6421093833427 } ], shardId: "test.foo-a_240.0709323500288", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a3
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|209||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-242", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753254), what: "split", ns: "test.foo", details: { before: { min: { a: 240.0709323500288 }, max: { a: 248.3080159156712 }, lastmod: Timestamp 2000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 240.0709323500288 }, max: { a: 242.6421093833427 }, lastmod: Timestamp 4000|210, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 242.6421093833427 }, max: { a: 248.3080159156712 }, lastmod: Timestamp 4000|211, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 239 version: 4|209||4fd97a3b0d2fef4d6a507be2 based on: 4|207||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|169||000000000000000000000000 min: { a: 392.8718206829087 } max: { a: 400.6101810646703 } on: { a: 395.6502767966605 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|209, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|35||000000000000000000000000 min: { a: 61.76919454003927 } max: { a: 66.37486853611429 } dataWritten: 210535 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 64.61811390064443 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|9||000000000000000000000000 min: { a: 240.0709323500288 } max: { a: 248.3080159156712 } dataWritten: 209729 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 240 version: 4|211||4fd97a3b0d2fef4d6a507be2 based on: 4|209||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|9||000000000000000000000000 min: { a: 240.0709323500288 } max: { a: 248.3080159156712 } on: { a: 242.6421093833427 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|211, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
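The split commits above (sequenceNumber 239 and 240) follow the same shard-side sequence: take the collection's distributed lock on the config server, check that the chunk version is still current ("splitChunk accepted at version ..."), replace the parent chunk with two children whose lastmod minor versions are bumped, write a "split" changelog event, and release the lock. The sketch below models that sequence with an in-memory stand-in for the config server; every name here is illustrative, not MongoDB's API.

from contextlib import contextmanager


class FakeConfig:
    """In-memory stand-in for the config server, for illustration only."""

    def __init__(self, version):
        self._version = version       # collection version, (major, minor)
        self._lock_holder = None
        self.changelog = []

    def acquire_lock(self, name, who):
        if self._lock_holder is not None:
            raise RuntimeError("distributed lock is already taken")
        self._lock_holder = who       # "distributed lock ... acquired"
        return who

    def release_lock(self, name, who):
        self._lock_holder = None      # "distributed lock ... unlocked"

    def collection_version(self, ns):
        return self._version

    def replace_chunk(self, ns, parent, children):
        self._version = children[-1]["lastmod"]

    def log_change(self, what, ns, details):
        self.changelog.append((what, ns, details))   # "about to log metadata event"


@contextmanager
def distributed_lock(config, name, who):
    ts = config.acquire_lock(name, who)
    try:
        yield ts
    finally:
        config.release_lock(name, ts)


def split_chunk(config, ns, chunk, split_key, expected_version, who):
    with distributed_lock(config, ns, who):
        if config.collection_version(ns) != expected_version:
            raise RuntimeError("stale version, split rejected")
        major, minor = expected_version
        left = {"min": chunk["min"], "max": split_key, "lastmod": (major, minor + 1)}
        right = {"min": split_key, "max": chunk["max"], "lastmod": (major, minor + 2)}
        config.replace_chunk(ns, chunk, [left, right])
        config.log_change("split", ns, {"before": chunk, "left": left, "right": right})
        return left, right


# Mirrors the 240.07 -> 248.31 split above: committed at 4|209, children 4|210 / 4|211.
cfg = FakeConfig(version=(4, 209))
parent = {"min": {"a": 240.0709}, "max": {"a": 248.3080}}
left, right = split_chunk(cfg, "test.foo", parent, {"a": 242.6421}, (4, 209), "conn5")
print(left["lastmod"], right["lastmod"])   # (4, 210) (4, 211)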
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 337.6965417950217 } -->> { : 344.8762285660836 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 337.6965417950217 } -->> { : 344.8762285660836 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, from: "shard0001", splitKeys: [ { a: 340.4008653065953 } ], shardId: "test.foo-a_337.6965417950217", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a4
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|211||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-243", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753277), what: "split", ns: "test.foo", details: { before: { min: { a: 337.6965417950217 }, max: { a: 344.8762285660836 }, lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 337.6965417950217 }, max: { a: 340.4008653065953 }, lastmod: Timestamp 4000|212, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 340.4008653065953 }, max: { a: 344.8762285660836 }, lastmod: Timestamp 4000|213, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 648.6747268265868 } -->> { : 652.9401841699823 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|171||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 344.8762285660836 } dataWritten: 210721 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 241 version: 4|213||4fd97a3b0d2fef4d6a507be2 based on: 4|211||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|171||000000000000000000000000 min: { a: 337.6965417950217 } max: { a: 344.8762285660836 } on: { a: 340.4008653065953 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|213, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|52||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 652.9401841699823 } dataWritten: 210421 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 651.6459660673049 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 407.0796926580036 } -->> { : 411.0287894698923 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 61.76919454003927 } -->> { : 66.37486853611429 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|167||000000000000000000000000 min: { a: 407.0796926580036 } max: { a: 411.0287894698923 } dataWritten: 210421 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 410.1416729297567 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|35||000000000000000000000000 min: { a: 61.76919454003927 } max: { a: 66.37486853611429 } dataWritten: 210058 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 64.6162680715302 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 787.2181223195419 } -->> { : 790.298943411581 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|201||000000000000000000000000 min: { a: 787.2181223195419 } max: { a: 790.298943411581 } dataWritten: 209771 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 789.8910999813916 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 694.6501944983177 } -->> { : 698.4329238257609 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|106||000000000000000000000000 min: { a: 694.6501944983177 } max: { a: 698.4329238257609 } dataWritten: 210372 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 697.3357289921945 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 20.02617482801994 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 20.02617482801994 } -->> { : 25.60273139230473 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 20.02617482801994 }, max: { a: 25.60273139230473 }, from: "shard0001", splitKeys: [ { a: 22.72135361925398 } ], shardId: "test.foo-a_20.02617482801994", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a5
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|213||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-244", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753334), what: "split", ns: "test.foo", details: { before: { min: { a: 20.02617482801994 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 4000|73, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 20.02617482801994 }, max: { a: 22.72135361925398 }, lastmod: Timestamp 4000|214, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 22.72135361925398 }, max: { a: 25.60273139230473 }, lastmod: Timestamp 4000|215, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|73||000000000000000000000000 min: { a: 20.02617482801994 } max: { a: 25.60273139230473 } dataWritten: 210435 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 242 version: 4|215||4fd97a3b0d2fef4d6a507be2 based on: 4|213||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|73||000000000000000000000000 min: { a: 20.02617482801994 } max: { a: 25.60273139230473 } on: { a: 22.72135361925398 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|215, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 315.9151551096841 } -->> { : 321.3459727153073 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|50||000000000000000000000000 min: { a: 315.9151551096841 } max: { a: 321.3459727153073 } dataWritten: 210632 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 318.8386354121041 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 331.4018789379612 } -->> { : 334.3168575448847 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|128||000000000000000000000000 min: { a: 331.4018789379612 } max: { a: 334.3168575448847 } dataWritten: 209988 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 333.8489091096695 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 873.8718881199745 } -->> { : 877.8438233640235 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|86||000000000000000000000000 min: { a: 873.8718881199745 } max: { a: 877.8438233640235 } dataWritten: 209743 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 876.6584614883969 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 833.5963963333859 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 833.5963963333859 } -->> { : 840.7121644073931 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 833.5963963333859 }, max: { a: 840.7121644073931 }, from: "shard0001", splitKeys: [ { a: 836.3608305125814 } ], shardId: "test.foo-a_833.5963963333859", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a6
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|215||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-245", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753425), what: "split", ns: "test.foo", details: { before: { min: { a: 833.5963963333859 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 2000|31, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 833.5963963333859 }, max: { a: 836.3608305125814 }, lastmod: Timestamp 4000|216, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 836.3608305125814 }, max: { a: 840.7121644073931 }, lastmod: Timestamp 4000|217, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 289.7137301985317 } -->> { : 294.0222214358918 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|31||000000000000000000000000 min: { a: 833.5963963333859 } max: { a: 840.7121644073931 } dataWritten: 210538 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 243 version: 4|217||4fd97a3b0d2fef4d6a507be2 based on: 4|215||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|31||000000000000000000000000 min: { a: 833.5963963333859 } max: { a: 840.7121644073931 } on: { a: 836.3608305125814 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|217, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|21||000000000000000000000000 min: { a: 289.7137301985317 } max: { a: 294.0222214358918 } dataWritten: 210771 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 292.3373629717813 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 810.8918013325706 } -->> { : 815.7684070742035 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|119||000000000000000000000000 min: { a: 810.8918013325706 } max: { a: 815.7684070742035 } dataWritten: 209764 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 813.3645716032846 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 668.6362621623331 } -->> { : 672.2870891659105 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|88||000000000000000000000000 min: { a: 668.6362621623331 } max: { a: 672.2870891659105 } dataWritten: 210570 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 671.3498332245974 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 16.11151483141404 } -->> { : 20.02617482801994 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|72||000000000000000000000000 min: { a: 16.11151483141404 } max: { a: 20.02617482801994 } dataWritten: 210645 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 18.77932531266291 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 664.5574284897642 } -->> { : 668.6362621623331 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|38||000000000000000000000000 min: { a: 664.5574284897642 } max: { a: 668.6362621623331 } dataWritten: 210549 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 667.2238146621592 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 599.2155367136296 } -->> { : 603.53104016638 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|12||000000000000000000000000 min: { a: 599.2155367136296 } max: { a: 603.53104016638 } dataWritten: 210365 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 601.8368245771925 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 623.3985075048967 } -->> { : 628.1995001147562 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 358.3343339611492 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|6||000000000000000000000000 min: { a: 623.3985075048967 } max: { a: 628.1995001147562 } dataWritten: 210743 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 626.1878739993558 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|68||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 358.3343339611492 } dataWritten: 209811 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 355.974292196054 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 580.4600029065366 } -->> { : 584.4225320226172 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|68||000000000000000000000000 min: { a: 580.4600029065366 } max: { a: 584.4225320226172 } dataWritten: 210193 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 583.0890117513093 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 233.8565055904641 } -->> { : 236.7690508533622 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|176||000000000000000000000000 min: { a: 233.8565055904641 } max: { a: 236.7690508533622 } dataWritten: 209717 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 236.5033547836582 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 74.43717892117874 } -->> { : 78.73686651492073 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|32||000000000000000000000000 min: { a: 74.43717892117874 } max: { a: 78.73686651492073 } dataWritten: 210033 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 77.07889485535414 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 463.2766201180535 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 463.2766201180535 } -->> { : 473.1445991105042 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, from: "shard0001", splitKeys: [ { a: 466.1607312365173 } ], shardId: "test.foo-a_463.2766201180535", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a7
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|217||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-246", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753725), what: "split", ns: "test.foo", details: { before: { min: { a: 463.2766201180535 }, max: { a: 473.1445991105042 }, lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 463.2766201180535 }, max: { a: 466.1607312365173 }, lastmod: Timestamp 4000|218, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 466.1607312365173 }, max: { a: 473.1445991105042 }, lastmod: Timestamp 4000|219, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|184||000000000000000000000000 min: { a: 463.2766201180535 } max: { a: 473.1445991105042 } dataWritten: 210778 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 244 version: 4|219||4fd97a3b0d2fef4d6a507be2 based on: 4|217||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|184||000000000000000000000000 min: { a: 463.2766201180535 } max: { a: 473.1445991105042 } on: { a: 466.1607312365173 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|219, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 918.4259760765641 } -->> { : 921.5853246168082 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|126||000000000000000000000000 min: { a: 918.4259760765641 } max: { a: 921.5853246168082 } dataWritten: 210479 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 921.0624603125924 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 768.6399184840259 } -->> { : 773.3799848158397 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|141||000000000000000000000000 min: { a: 768.6399184840259 } max: { a: 773.3799848158397 } dataWritten: 210655 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 771.1914683637809 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 92.91917824556573 } -->> { : 101.960589257945 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 92.91917824556573 } -->> { : 101.960589257945 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, from: "shard0001", splitKeys: [ { a: 95.6069228239147 } ], shardId: "test.foo-a_92.91917824556573", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a8
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|219||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-247", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753846), what: "split", ns: "test.foo", details: { before: { min: { a: 92.91917824556573 }, max: { a: 101.960589257945 }, lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 92.91917824556573 }, max: { a: 95.6069228239147 }, lastmod: Timestamp 4000|220, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 95.6069228239147 }, max: { a: 101.960589257945 }, lastmod: Timestamp 4000|221, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|140||000000000000000000000000 min: { a: 92.91917824556573 } max: { a: 101.960589257945 } dataWritten: 210294 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 245 version: 4|221||4fd97a3b0d2fef4d6a507be2 based on: 4|219||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|140||000000000000000000000000 min: { a: 92.91917824556573 } max: { a: 101.960589257945 } on: { a: 95.6069228239147 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|221, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 248.3080159156712 } -->> { : 254.1395685736485 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 248.3080159156712 } -->> { : 254.1395685736485 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 248.3080159156712 }, max: { a: 254.1395685736485 }, from: "shard0001", splitKeys: [ { a: 250.7993295308498 } ], shardId: "test.foo-a_248.3080159156712", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0a9
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|221||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-248", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753876), what: "split", ns: "test.foo", details: { before: { min: { a: 248.3080159156712 }, max: { a: 254.1395685736485 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 248.3080159156712 }, max: { a: 250.7993295308498 }, lastmod: Timestamp 4000|222, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 250.7993295308498 }, max: { a: 254.1395685736485 }, lastmod: Timestamp 4000|223, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 254.1395685736485 } dataWritten: 209895 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 246 version: 4|223||4fd97a3b0d2fef4d6a507be2 based on: 4|221||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|2||000000000000000000000000 min: { a: 248.3080159156712 } max: { a: 254.1395685736485 } on: { a: 250.7993295308498 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|223, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|32||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 181.7281932506388 } dataWritten: 210697 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 176.0230312595962 } -->> { : 181.7281932506388 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 176.0230312595962 } -->> { : 181.7281932506388 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 176.0230312595962 }, max: { a: 181.7281932506388 }, from: "shard0001", splitKeys: [ { a: 178.4802269484291 } ], shardId: "test.foo-a_176.0230312595962", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0aa
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|223||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-249", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753890), what: "split", ns: "test.foo", details: { before: { min: { a: 176.0230312595962 }, max: { a: 181.7281932506388 }, lastmod: Timestamp 2000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 176.0230312595962 }, max: { a: 178.4802269484291 }, lastmod: Timestamp 4000|224, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 178.4802269484291 }, max: { a: 181.7281932506388 }, lastmod: Timestamp 4000|225, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 247 version: 4|225||4fd97a3b0d2fef4d6a507be2 based on: 4|223||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|32||000000000000000000000000 min: { a: 176.0230312595962 } max: { a: 181.7281932506388 } on: { a: 178.4802269484291 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|225, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|16||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 777.6503149863191 } dataWritten: 210589 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 773.3799848158397 } -->> { : 777.6503149863191 }
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 775.8862054362851 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 558.0115575910545 } -->> { : 560.838593433049 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 57.56464668319472 } -->> { : 61.76919454003927 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|182||000000000000000000000000 min: { a: 558.0115575910545 } max: { a: 560.838593433049 } dataWritten: 210109 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 560.5746471892011 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|34||000000000000000000000000 min: { a: 57.56464668319472 } max: { a: 61.76919454003927 } dataWritten: 210218 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 60.13862849682317 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 980.667776515926 } -->> { : 985.6773819217475 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|97||000000000000000000000000 min: { a: 980.667776515926 } max: { a: 985.6773819217475 } dataWritten: 210169 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 983.5011352956335 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 914.1361338478089 } -->> { : 918.4259760765641 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|147||000000000000000000000000 min: { a: 914.1361338478089 } max: { a: 918.4259760765641 } dataWritten: 210265 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 916.7718573960079 }
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 167.6382092456179 } -->> { : 176.0230312595962 }
m30001| Thu Jun 14 01:45:53 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 167.6382092456179 } -->> { : 176.0230312595962 }
m30001| Thu Jun 14 01:45:53 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, from: "shard0001", splitKeys: [ { a: 170.2748683082939 } ], shardId: "test.foo-a_167.6382092456179", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:53 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9132a28802daeee0ab
m30001| Thu Jun 14 01:45:53 [conn5] splitChunk accepted at version 4|225||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:53 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:53-250", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652753969), what: "split", ns: "test.foo", details: { before: { min: { a: 167.6382092456179 }, max: { a: 176.0230312595962 }, lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 167.6382092456179 }, max: { a: 170.2748683082939 }, lastmod: Timestamp 4000|226, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 170.2748683082939 }, max: { a: 176.0230312595962 }, lastmod: Timestamp 4000|227, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:53 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:53 [conn5] request split points lookup for chunk test.foo { : 433.3806610330477 } -->> { : 437.040103636678 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|164||000000000000000000000000 min: { a: 167.6382092456179 } max: { a: 176.0230312595962 } dataWritten: 209850 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 248 version: 4|227||4fd97a3b0d2fef4d6a507be2 based on: 4|225||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:53 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|164||000000000000000000000000 min: { a: 167.6382092456179 } max: { a: 176.0230312595962 } on: { a: 170.2748683082939 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|227, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:53 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:53 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|74||000000000000000000000000 min: { a: 433.3806610330477 } max: { a: 437.040103636678 } dataWritten: 210066 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:53 [conn] chunk not full enough to trigger auto-split { a: 435.9916632574697 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 873.8718881199745 } -->> { : 877.8438233640235 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|86||000000000000000000000000 min: { a: 873.8718881199745 } max: { a: 877.8438233640235 } dataWritten: 210615 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 876.5596761531687 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 991.2502100401695 } -->> { : 994.7222740534528 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|134||000000000000000000000000 min: { a: 991.2502100401695 } max: { a: 994.7222740534528 } dataWritten: 210514 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 993.9153292532875 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 501.5945768521381 } -->> { : 506.5947777056855 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|125||000000000000000000000000 min: { a: 501.5945768521381 } max: { a: 506.5947777056855 } dataWritten: 209763 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 503.9737511516869 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 836.3608305125814 } -->> { : 840.7121644073931 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|217||000000000000000000000000 min: { a: 836.3608305125814 } max: { a: 840.7121644073931 } dataWritten: 210634 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 839.0571148274014 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 184.9464054233513 } -->> { : 188.6698238706465 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|155||000000000000000000000000 min: { a: 184.9464054233513 } max: { a: 188.6698238706465 } dataWritten: 210223 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 187.6833302585019 }
m30000| Thu Jun 14 01:45:54 [conn11] request split points lookup for chunk test.foo { : 5.826356493812579 } -->> { : 12.55217658236718 }
m30000| Thu Jun 14 01:45:54 [conn11] max number of requested split points reached (2) before the end of chunk test.foo { : 5.826356493812579 } -->> { : 12.55217658236718 }
m30000| Thu Jun 14 01:45:54 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, from: "shard0000", splitKeys: [ { a: 8.457858050974988 } ], shardId: "test.foo-a_5.826356493812579", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:45:54 [conn11] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:45:54 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30000:1339652754:1563068116 (sleeping for 30000ms)
m30000| Thu Jun 14 01:45:54 [conn11] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339652754:1563068116' acquired, ts : 4fd97a9222746ab991410390
m30000| Thu Jun 14 01:45:54 [conn11] splitChunk accepted at version 4|0||4fd97a3b0d2fef4d6a507be2
m30000| Thu Jun 14 01:45:54 [conn11] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:54-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:60402", time: new Date(1339652754355), what: "split", ns: "test.foo", details: { before: { min: { a: 5.826356493812579 }, max: { a: 12.55217658236718 }, lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 5.826356493812579 }, max: { a: 8.457858050974988 }, lastmod: Timestamp 4000|228, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 8.457858050974988 }, max: { a: 12.55217658236718 }, lastmod: Timestamp 4000|229, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { a: 5.826356493812579 } max: { a: 12.55217658236718 } dataWritten: 209789 splitThreshold: 1048576
m30000| Thu Jun 14 01:45:54 [initandlisten] connection accepted from 127.0.0.1:39146 #19 (17 connections now open)
m30000| Thu Jun 14 01:45:54 [conn11] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339652754:1563068116' unlocked.
m30999| Thu Jun 14 01:45:54 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 249 version: 4|229||4fd97a3b0d2fef4d6a507be2 based on: 4|227||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:54 [conn] autosplitted test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 4|0||000000000000000000000000 min: { a: 5.826356493812579 } max: { a: 12.55217658236718 } on: { a: 8.457858050974988 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|227, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|166||000000000000000000000000 min: { a: 404.1458625239371 } max: { a: 407.0796926580036 } dataWritten: 210667 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 406.7374767445424 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 404.1458625239371 } -->> { : 407.0796926580036 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|48||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 991.2502100401695 } dataWritten: 210269 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 985.6773819217475 } -->> { : 991.2502100401695 }
m30001| Thu Jun 14 01:45:54 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 985.6773819217475 } -->> { : 991.2502100401695 }
m30001| Thu Jun 14 01:45:54 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 985.6773819217475 }, max: { a: 991.2502100401695 }, from: "shard0001", splitKeys: [ { a: 988.3510075746844 } ], shardId: "test.foo-a_985.6773819217475", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:54 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9232a28802daeee0ac
m30001| Thu Jun 14 01:45:54 [conn5] splitChunk accepted at version 4|227||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:54 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:54-251", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652754402), what: "split", ns: "test.foo", details: { before: { min: { a: 985.6773819217475 }, max: { a: 991.2502100401695 }, lastmod: Timestamp 2000|48, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 985.6773819217475 }, max: { a: 988.3510075746844 }, lastmod: Timestamp 4000|230, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 988.3510075746844 }, max: { a: 991.2502100401695 }, lastmod: Timestamp 4000|231, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:54 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 250 version: 4|231||4fd97a3b0d2fef4d6a507be2 based on: 4|229||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:54 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|48||000000000000000000000000 min: { a: 985.6773819217475 } max: { a: 991.2502100401695 } on: { a: 988.3510075746844 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|231, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } dataWritten: 210002 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:45:54 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 358.3343339611492 } -->> { : 363.6779080113047 }
m30001| Thu Jun 14 01:45:54 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 358.3343339611492 }, max: { a: 363.6779080113047 }, from: "shard0001", splitKeys: [ { a: 360.7881657776425 } ], shardId: "test.foo-a_358.3343339611492", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:54 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9232a28802daeee0ad
m30001| Thu Jun 14 01:45:54 [conn5] splitChunk accepted at version 4|231||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:54 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:54-252", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652754508), what: "split", ns: "test.foo", details: { before: { min: { a: 358.3343339611492 }, max: { a: 363.6779080113047 }, lastmod: Timestamp 2000|69, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 358.3343339611492 }, max: { a: 360.7881657776425 }, lastmod: Timestamp 4000|232, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 360.7881657776425 }, max: { a: 363.6779080113047 }, lastmod: Timestamp 4000|233, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:54 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 251 version: 4|233||4fd97a3b0d2fef4d6a507be2 based on: 4|231||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:54 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|69||000000000000000000000000 min: { a: 358.3343339611492 } max: { a: 363.6779080113047 } on: { a: 360.7881657776425 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|233, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|182||000000000000000000000000 min: { a: 558.0115575910545 } max: { a: 560.838593433049 } dataWritten: 210213 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 560.5272299948903 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|122||000000000000000000000000 min: { a: 840.7121644073931 } max: { a: 843.8858257205128 } dataWritten: 209726 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 843.1866688749371 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 558.0115575910545 } -->> { : 560.838593433049 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 840.7121644073931 } -->> { : 843.8858257205128 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 797.6352444405507 } -->> { : 802.4966878498034 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|10||000000000000000000000000 min: { a: 797.6352444405507 } max: { a: 802.4966878498034 } dataWritten: 210224 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 800.2025481737688 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 851.468355264985 } -->> { : 855.8703567421647 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|163||000000000000000000000000 min: { a: 851.468355264985 } max: { a: 855.8703567421647 } dataWritten: 210357 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 854.1110197382318 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 373.3849373054079 } -->> { : 378.3565272980204 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|27||000000000000000000000000 min: { a: 373.3849373054079 } max: { a: 378.3565272980204 } dataWritten: 210590 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 376.1188160272501 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|171||000000000000000000000000 min: { a: 228.7035403403385 } max: { a: 233.8565055904641 } dataWritten: 210533 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 228.7035403403385 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:45:54 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 228.7035403403385 } -->> { : 233.8565055904641 }
m30001| Thu Jun 14 01:45:54 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 228.7035403403385 }, max: { a: 233.8565055904641 }, from: "shard0001", splitKeys: [ { a: 231.249558963907 } ], shardId: "test.foo-a_228.7035403403385", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:54 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9232a28802daeee0ae
m30001| Thu Jun 14 01:45:54 [conn5] splitChunk accepted at version 4|233||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:54 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:54-253", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652754706), what: "split", ns: "test.foo", details: { before: { min: { a: 228.7035403403385 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 4000|171, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 228.7035403403385 }, max: { a: 231.249558963907 }, lastmod: Timestamp 4000|234, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 231.249558963907 }, max: { a: 233.8565055904641 }, lastmod: Timestamp 4000|235, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:54 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 252 version: 4|235||4fd97a3b0d2fef4d6a507be2 based on: 4|233||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:54 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|171||000000000000000000000000 min: { a: 228.7035403403385 } max: { a: 233.8565055904641 } on: { a: 231.249558963907 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|235, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|163||000000000000000000000000 min: { a: 851.468355264985 } max: { a: 855.8703567421647 } dataWritten: 210269 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 851.468355264985 } -->> { : 855.8703567421647 }
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 854.0985102984404 }
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|16||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 447.8806134954977 } dataWritten: 210557 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 253 version: 4|237||4fd97a3b0d2fef4d6a507be2 based on: 4|235||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:54 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|16||000000000000000000000000 min: { a: 441.0435238853461 } max: { a: 447.8806134954977 } on: { a: 443.7079718299926 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|237, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 441.0435238853461 } -->> { : 447.8806134954977 }
m30001| Thu Jun 14 01:45:54 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 441.0435238853461 } -->> { : 447.8806134954977 }
m30001| Thu Jun 14 01:45:54 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 441.0435238853461 }, max: { a: 447.8806134954977 }, from: "shard0001", splitKeys: [ { a: 443.7079718299926 } ], shardId: "test.foo-a_441.0435238853461", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:54 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9232a28802daeee0af
m30001| Thu Jun 14 01:45:54 [conn5] splitChunk accepted at version 4|235||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:54 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:54-254", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652754761), what: "split", ns: "test.foo", details: { before: { min: { a: 441.0435238853461 }, max: { a: 447.8806134954977 }, lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 441.0435238853461 }, max: { a: 443.7079718299926 }, lastmod: Timestamp 4000|236, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 443.7079718299926 }, max: { a: 447.8806134954977 }, lastmod: Timestamp 4000|237, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 47.94081917961535 } -->> { : 51.90923851177054 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|102||000000000000000000000000 min: { a: 47.94081917961535 } max: { a: 51.90923851177054 } dataWritten: 210207 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 50.61903290057723 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 802.4966878498034 } -->> { : 807.4105833931693 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|18||000000000000000000000000 min: { a: 802.4966878498034 } max: { a: 807.4105833931693 } dataWritten: 210619 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 805.2862333521945 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 615.3266278873516 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:45:54 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 615.3266278873516 } -->> { : 623.3985075048967 }
m30001| Thu Jun 14 01:45:54 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 615.3266278873516 }, max: { a: 623.3985075048967 }, from: "shard0001", splitKeys: [ { a: 617.9571577143996 } ], shardId: "test.foo-a_615.3266278873516", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:54 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9232a28802daeee0b0
m30001| Thu Jun 14 01:45:54 [conn5] splitChunk accepted at version 4|237||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:54 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:54-255", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652754870), what: "split", ns: "test.foo", details: { before: { min: { a: 615.3266278873516 }, max: { a: 623.3985075048967 }, lastmod: Timestamp 2000|63, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 615.3266278873516 }, max: { a: 617.9571577143996 }, lastmod: Timestamp 4000|238, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 617.9571577143996 }, max: { a: 623.3985075048967 }, lastmod: Timestamp 4000|239, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:54 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|63||000000000000000000000000 min: { a: 615.3266278873516 } max: { a: 623.3985075048967 } dataWritten: 210644 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 254 version: 4|239||4fd97a3b0d2fef4d6a507be2 based on: 4|237||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:54 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|63||000000000000000000000000 min: { a: 615.3266278873516 } max: { a: 623.3985075048967 } on: { a: 617.9571577143996 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|239, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:54 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 443.7079718299926 } -->> { : 447.8806134954977 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 664.5574284897642 } -->> { : 668.6362621623331 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|237||000000000000000000000000 min: { a: 443.7079718299926 } max: { a: 447.8806134954977 } dataWritten: 209860 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 446.3167803185943 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|38||000000000000000000000000 min: { a: 664.5574284897642 } max: { a: 668.6362621623331 } dataWritten: 210292 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 667.1599681571782 }
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 437.040103636678 } -->> { : 441.0435238853461 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|75||000000000000000000000000 min: { a: 437.040103636678 } max: { a: 441.0435238853461 } dataWritten: 210220 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 439.6930026853785 }
m30999| Thu Jun 14 01:45:54 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|198||000000000000000000000000 min: { a: 714.0536251380356 } max: { a: 717.0859810000978 } dataWritten: 210668 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:54 [conn5] request split points lookup for chunk test.foo { : 714.0536251380356 } -->> { : 717.0859810000978 }
m30999| Thu Jun 14 01:45:54 [conn] chunk not full enough to trigger auto-split { a: 716.7827714719174 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|216||000000000000000000000000 min: { a: 833.5963963333859 } max: { a: 836.3608305125814 } dataWritten: 210598 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 833.5963963333859 } -->> { : 836.3608305125814 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 836.1667076758572 }
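Most of the checks in this stretch end with "chunk not full enough to trigger auto-split": mongos counts the bytes it has routed into each chunk (the dataWritten values, all near 210 KB against splitThreshold: 1048576), asks the owning shard for split points once enough has accumulated ("request split points lookup"), and only proceeds to splitChunk when the shard reports a usable split key ("max number of requested split points reached (2) before the end of chunk"). The sketch below is an assumed, simplified model of that router-side trigger; the names and the threshold/5 fraction are inferences from these log values, not MongoDB's code.

    # Illustrative-only model of the mongos-side autosplit trigger; names and
    # the 1/5 fraction are assumptions drawn from the logged dataWritten and
    # splitThreshold values, not MongoDB source.
    SPLIT_THRESHOLD = 1 << 20          # 1048576 bytes, as logged

    class ChunkWriteTracker:
        """Per-chunk byte counter kept by the router for writes it forwards."""
        def __init__(self):
            self.data_written = 0

        def note_write(self, nbytes):
            self.data_written += nbytes
            return self.data_written

    def maybe_autosplit(tracker, nbytes, request_split_points):
        """Called after forwarding a write of `nbytes` to the owning shard.

        `request_split_points` stands in for the shard's 'request split points
        lookup'; it returns candidate split keys, or an empty list when the
        chunk is "not full enough".
        """
        written = tracker.note_write(nbytes)
        # The logged dataWritten values (~210k) sit close to threshold/5, so
        # this sketch fires the check at that fraction; treat it as an assumption.
        if written < SPLIT_THRESHOLD // 5:
            return None
        tracker.data_written = 0
        split_keys = request_split_points()
        if not split_keys:
            return None                # "chunk not full enough to trigger auto-split"
        return split_keys              # -> send splitChunk, then reload ChunkManager

    # Toy usage: nothing happens until roughly splitThreshold/5 bytes accumulate.
    tracker = ChunkWriteTracker()
    assert maybe_autosplit(tracker, 1024, lambda: []) is None
    tracker.data_written = SPLIT_THRESHOLD // 5            # pretend enough writes arrived
    assert maybe_autosplit(tracker, 1024, lambda: [617.957]) == [617.957]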
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 353.2720479801309 } -->> { : 358.3343339611492 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 353.2720479801309 } -->> { : 358.3343339611492 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 353.2720479801309 }, max: { a: 358.3343339611492 }, from: "shard0001", splitKeys: [ { a: 355.8076820303829 } ], shardId: "test.foo-a_353.2720479801309", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b1
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|239||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-256", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755019), what: "split", ns: "test.foo", details: { before: { min: { a: 353.2720479801309 }, max: { a: 358.3343339611492 }, lastmod: Timestamp 2000|68, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 353.2720479801309 }, max: { a: 355.8076820303829 }, lastmod: Timestamp 4000|240, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 355.8076820303829 }, max: { a: 358.3343339611492 }, lastmod: Timestamp 4000|241, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 725.5771489434317 } -->> { : 729.8361633348899 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|68||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 358.3343339611492 } dataWritten: 210246 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 255 version: 4|241||4fd97a3b0d2fef4d6a507be2 based on: 4|239||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|68||000000000000000000000000 min: { a: 353.2720479801309 } max: { a: 358.3343339611492 } on: { a: 355.8076820303829 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|241, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|83||000000000000000000000000 min: { a: 725.5771489434317 } max: { a: 729.8361633348899 } dataWritten: 209927 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 727.9586522597881 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 378.3565272980204 } -->> { : 383.7239757530736 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 378.3565272980204 } -->> { : 383.7239757530736 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 378.3565272980204 }, max: { a: 383.7239757530736 }, from: "shard0001", splitKeys: [ { a: 380.9471963970786 } ], shardId: "test.foo-a_378.3565272980204", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b2
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|241||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-257", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755082), what: "split", ns: "test.foo", details: { before: { min: { a: 378.3565272980204 }, max: { a: 383.7239757530736 }, lastmod: Timestamp 2000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 378.3565272980204 }, max: { a: 380.9471963970786 }, lastmod: Timestamp 4000|242, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 380.9471963970786 }, max: { a: 383.7239757530736 }, lastmod: Timestamp 4000|243, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 689.5707127489441 } -->> { : 694.6501944983177 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 127.4590140914801 } -->> { : 131.8115136015859 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 531.7597013546634 } -->> { : 536.0462960134931 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 545.8257932837977 } -->> { : 548.9817180888258 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|36||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 383.7239757530736 } dataWritten: 209741 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 256 version: 4|243||4fd97a3b0d2fef4d6a507be2 based on: 4|241||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|36||000000000000000000000000 min: { a: 378.3565272980204 } max: { a: 383.7239757530736 } on: { a: 380.9471963970786 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|243, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|29||000000000000000000000000 min: { a: 689.5707127489441 } max: { a: 694.6501944983177 } dataWritten: 210736 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 692.0301853096332 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|46||000000000000000000000000 min: { a: 127.4590140914801 } max: { a: 131.8115136015859 } dataWritten: 209980 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 130.029198171601 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|22||000000000000000000000000 min: { a: 531.7597013546634 } max: { a: 536.0462960134931 } dataWritten: 209788 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 534.266192107448 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|150||000000000000000000000000 min: { a: 545.8257932837977 } max: { a: 548.9817180888258 } dataWritten: 210636 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 548.4094304956608 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 732.9348251743502 } -->> { : 735.4457009121708 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|202||000000000000000000000000 min: { a: 732.9348251743502 } max: { a: 735.4457009121708 } dataWritten: 209889 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 735.2578631205316 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 991.2502100401695 } -->> { : 994.7222740534528 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|134||000000000000000000000000 min: { a: 991.2502100401695 } max: { a: 994.7222740534528 } dataWritten: 210441 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 993.8074059163828 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 498.2021416153332 } -->> { : 501.5945768521381 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 254.1395685736485 } -->> { : 258.6206493525194 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|124||000000000000000000000000 min: { a: 498.2021416153332 } max: { a: 501.5945768521381 } dataWritten: 210369 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 500.7825651482878 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|30||000000000000000000000000 min: { a: 254.1395685736485 } max: { a: 258.6206493525194 } dataWritten: 210355 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 256.7029019512853 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|214||000000000000000000000000 min: { a: 20.02617482801994 } max: { a: 22.72135361925398 } dataWritten: 210466 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 22.40592506355343 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|57||000000000000000000000000 min: { a: 34.95140019143683 } max: { a: 39.89992532263464 } dataWritten: 210155 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 37.56466237632716 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|64||000000000000000000000000 min: { a: 12.55217658236718 } max: { a: 16.11151483141404 } dataWritten: 210377 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 14.93336201497075 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|83||000000000000000000000000 min: { a: 725.5771489434317 } max: { a: 729.8361633348899 } dataWritten: 210503 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 727.8872753943106 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|221||000000000000000000000000 min: { a: 95.6069228239147 } max: { a: 101.960589257945 } dataWritten: 210650 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 257 version: 4|245||4fd97a3b0d2fef4d6a507be2 based on: 4|243||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|221||000000000000000000000000 min: { a: 95.6069228239147 } max: { a: 101.960589257945 } on: { a: 98.16826107499755 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|245, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 20.02617482801994 } -->> { : 22.72135361925398 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 34.95140019143683 } -->> { : 39.89992532263464 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 12.55217658236718 } -->> { : 16.11151483141404 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 725.5771489434317 } -->> { : 729.8361633348899 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 95.6069228239147 } -->> { : 101.960589257945 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 95.6069228239147 } -->> { : 101.960589257945 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 95.6069228239147 }, max: { a: 101.960589257945 }, from: "shard0001", splitKeys: [ { a: 98.16826107499755 } ], shardId: "test.foo-a_95.6069228239147", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b3
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|243||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-258", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755285), what: "split", ns: "test.foo", details: { before: { min: { a: 95.6069228239147 }, max: { a: 101.960589257945 }, lastmod: Timestamp 4000|221, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 95.6069228239147 }, max: { a: 98.16826107499755 }, lastmod: Timestamp 4000|244, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 98.16826107499755 }, max: { a: 101.960589257945 }, lastmod: Timestamp 4000|245, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|214||000000000000000000000000 min: { a: 20.02617482801994 } max: { a: 22.72135361925398 } dataWritten: 210225 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 22.39615986279575 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|21||000000000000000000000000 min: { a: 289.7137301985317 } max: { a: 294.0222214358918 } dataWritten: 209827 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 292.2046310633953 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 20.02617482801994 } -->> { : 22.72135361925398 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 289.7137301985317 } -->> { : 294.0222214358918 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|57||000000000000000000000000 min: { a: 34.95140019143683 } max: { a: 39.89992532263464 } dataWritten: 210691 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 37.55499233386139 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 34.95140019143683 } -->> { : 39.89992532263464 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|86||000000000000000000000000 min: { a: 873.8718881199745 } max: { a: 877.8438233640235 } dataWritten: 209824 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 876.4821343865555 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 873.8718881199745 } -->> { : 877.8438233640235 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|125||000000000000000000000000 min: { a: 501.5945768521381 } max: { a: 506.5947777056855 } dataWritten: 210385 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 258 version: 4|247||4fd97a3b0d2fef4d6a507be2 based on: 4|245||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|125||000000000000000000000000 min: { a: 501.5945768521381 } max: { a: 506.5947777056855 } on: { a: 503.8814286501491 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|247, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 501.5945768521381 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 501.5945768521381 } -->> { : 506.5947777056855 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 501.5945768521381 }, max: { a: 506.5947777056855 }, from: "shard0001", splitKeys: [ { a: 503.8814286501491 } ], shardId: "test.foo-a_501.5945768521381", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b4
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|245||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-259", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755428), what: "split", ns: "test.foo", details: { before: { min: { a: 501.5945768521381 }, max: { a: 506.5947777056855 }, lastmod: Timestamp 4000|125, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 501.5945768521381 }, max: { a: 503.8814286501491 }, lastmod: Timestamp 4000|246, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 503.8814286501491 }, max: { a: 506.5947777056855 }, lastmod: Timestamp 4000|247, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|57||000000000000000000000000 min: { a: 34.95140019143683 } max: { a: 39.89992532263464 } dataWritten: 210545 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 37.54649819399069 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 34.95140019143683 } -->> { : 39.89992532263464 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|45||000000000000000000000000 min: { a: 106.0311910436654 } max: { a: 111.0431509615952 } dataWritten: 210676 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 108.4352613390478 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 106.0311910436654 } -->> { : 111.0431509615952 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|101||000000000000000000000000 min: { a: 220.5716558736682 } max: { a: 225.5962198744838 } dataWritten: 210611 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 259 version: 4|249||4fd97a3b0d2fef4d6a507be2 based on: 4|247||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|101||000000000000000000000000 min: { a: 220.5716558736682 } max: { a: 225.5962198744838 } on: { a: 222.9840106087572 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|249, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 220.5716558736682 } -->> { : 225.5962198744838 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 220.5716558736682 } -->> { : 225.5962198744838 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 220.5716558736682 }, max: { a: 225.5962198744838 }, from: "shard0001", splitKeys: [ { a: 222.9840106087572 } ], shardId: "test.foo-a_220.5716558736682", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b5
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|247||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-260", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755471), what: "split", ns: "test.foo", details: { before: { min: { a: 220.5716558736682 }, max: { a: 225.5962198744838 }, lastmod: Timestamp 4000|101, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 220.5716558736682 }, max: { a: 222.9840106087572 }, lastmod: Timestamp 4000|248, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 222.9840106087572 }, max: { a: 225.5962198744838 }, lastmod: Timestamp 4000|249, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|16||000000000000000000000000 min: { a: 773.3799848158397 } max: { a: 777.6503149863191 } dataWritten: 210091 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 775.6810399681909 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 773.3799848158397 } -->> { : 777.6503149863191 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 321.3459727153073 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 321.3459727153073 } -->> { : 327.5292321238884 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 321.3459727153073 }, max: { a: 327.5292321238884 }, from: "shard0001", splitKeys: [ { a: 323.8729876956295 } ], shardId: "test.foo-a_321.3459727153073", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b6
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|249||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-261", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755545), what: "split", ns: "test.foo", details: { before: { min: { a: 321.3459727153073 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 2000|51, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 321.3459727153073 }, max: { a: 323.8729876956295 }, lastmod: Timestamp 4000|250, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 323.8729876956295 }, max: { a: 327.5292321238884 }, lastmod: Timestamp 4000|251, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|51||000000000000000000000000 min: { a: 321.3459727153073 } max: { a: 327.5292321238884 } dataWritten: 209858 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 260 version: 4|251||4fd97a3b0d2fef4d6a507be2 based on: 4|249||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|51||000000000000000000000000 min: { a: 321.3459727153073 } max: { a: 327.5292321238884 } on: { a: 323.8729876956295 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|251, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 300.0603324337813 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 300.0603324337813 } -->> { : 309.3101713472285 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 300.0603324337813 }, max: { a: 309.3101713472285 }, from: "shard0001", splitKeys: [ { a: 302.7151830329477 } ], shardId: "test.foo-a_300.0603324337813", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b7
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|251||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-262", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755608), what: "split", ns: "test.foo", details: { before: { min: { a: 300.0603324337813 }, max: { a: 309.3101713472285 }, lastmod: Timestamp 2000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 300.0603324337813 }, max: { a: 302.7151830329477 }, lastmod: Timestamp 4000|252, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 302.7151830329477 }, max: { a: 309.3101713472285 }, lastmod: Timestamp 4000|253, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 181.7281932506388 } -->> { : 184.9464054233513 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|11||000000000000000000000000 min: { a: 300.0603324337813 } max: { a: 309.3101713472285 } dataWritten: 210172 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 261 version: 4|253||4fd97a3b0d2fef4d6a507be2 based on: 4|251||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|11||000000000000000000000000 min: { a: 300.0603324337813 } max: { a: 309.3101713472285 } on: { a: 302.7151830329477 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|253, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|154||000000000000000000000000 min: { a: 181.7281932506388 } max: { a: 184.9464054233513 } dataWritten: 209984 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 184.2940405262503 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 539.1281234038355 } -->> { : 542.4296058071777 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|189||000000000000000000000000 min: { a: 539.1281234038355 } max: { a: 542.4296058071777 } dataWritten: 210336 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 541.7678969551614 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 905.2934559328332 } -->> { : 910.9608546053483 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 905.2934559328332 } -->> { : 910.9608546053483 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 905.2934559328332 }, max: { a: 910.9608546053483 }, from: "shard0001", splitKeys: [ { a: 907.8304631917699 } ], shardId: "test.foo-a_905.2934559328332", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b8
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|253||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-263", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755649), what: "split", ns: "test.foo", details: { before: { min: { a: 905.2934559328332 }, max: { a: 910.9608546053483 }, lastmod: Timestamp 2000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 905.2934559328332 }, max: { a: 907.8304631917699 }, lastmod: Timestamp 4000|254, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 907.8304631917699 }, max: { a: 910.9608546053483 }, lastmod: Timestamp 4000|255, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|34||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 910.9608546053483 } dataWritten: 209894 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 262 version: 4|255||4fd97a3b0d2fef4d6a507be2 based on: 4|253||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|34||000000000000000000000000 min: { a: 905.2934559328332 } max: { a: 910.9608546053483 } on: { a: 907.8304631917699 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|255, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|229, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|0||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 5.826356493812579 } dataWritten: 210636 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 263 version: 4|257||4fd97a3b0d2fef4d6a507be2 based on: 4|255||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0000:localhost:30000 lastmod: 3|0||000000000000000000000000 min: { a: 0.07367152018367129 } max: { a: 5.826356493812579 } on: { a: 2.742599007396374 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|255, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|257, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30000| Thu Jun 14 01:45:55 [conn11] request split points lookup for chunk test.foo { : 0.07367152018367129 } -->> { : 5.826356493812579 }
m30000| Thu Jun 14 01:45:55 [conn11] max number of requested split points reached (2) before the end of chunk test.foo { : 0.07367152018367129 } -->> { : 5.826356493812579 }
m30000| Thu Jun 14 01:45:55 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, from: "shard0000", splitKeys: [ { a: 2.742599007396374 } ], shardId: "test.foo-a_0.07367152018367129", configdb: "localhost:30000" }
m30000| Thu Jun 14 01:45:55 [conn11] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Thu Jun 14 01:45:55 [conn11] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339652754:1563068116' acquired, ts : 4fd97a9322746ab991410391
m30000| Thu Jun 14 01:45:55 [conn11] splitChunk accepted at version 4|229||4fd97a3b0d2fef4d6a507be2
m30000| Thu Jun 14 01:45:55 [conn11] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:60402", time: new Date(1339652755687), what: "split", ns: "test.foo", details: { before: { min: { a: 0.07367152018367129 }, max: { a: 5.826356493812579 }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.07367152018367129 }, max: { a: 2.742599007396374 }, lastmod: Timestamp 4000|256, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 2.742599007396374 }, max: { a: 5.826356493812579 }, lastmod: Timestamp 4000|257, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30000| Thu Jun 14 01:45:55 [conn11] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30000:1339652754:1563068116' unlocked.
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 590.8997745355827 } -->> { : 594.3878051880898 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|90||000000000000000000000000 min: { a: 590.8997745355827 } max: { a: 594.3878051880898 } dataWritten: 209841 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 593.2309566333259 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 114.9662096443472 } -->> { : 118.3157678917793 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|156||000000000000000000000000 min: { a: 114.9662096443472 } max: { a: 118.3157678917793 } dataWritten: 210269 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 117.6413334457005 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 756.637103632288 } -->> { : 761.349721153896 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|55||000000000000000000000000 min: { a: 756.637103632288 } max: { a: 761.349721153896 } dataWritten: 210233 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 759.0629521011773 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 101.960589257945 } -->> { : 106.0311910436654 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|44||000000000000000000000000 min: { a: 101.960589257945 } max: { a: 106.0311910436654 } dataWritten: 210707 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 104.5974353320599 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 594.3878051880898 } -->> { : 599.2155367136296 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|91||000000000000000000000000 min: { a: 594.3878051880898 } max: { a: 599.2155367136296 } dataWritten: 210086 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 596.9351148569995 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 761.349721153896 } -->> { : 765.2211241548246 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|70||000000000000000000000000 min: { a: 761.349721153896 } max: { a: 765.2211241548246 } dataWritten: 210704 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 763.795576388765 }
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 506.5947777056855 } -->> { : 510.639225969218 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|36||000000000000000000000000 min: { a: 506.5947777056855 } max: { a: 510.639225969218 } dataWritten: 210580 splitThreshold: 1048576
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 508.9511945596606 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|207||000000000000000000000000 min: { a: 675.1811603867598 } max: { a: 678.3563510786536 } dataWritten: 210403 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 675.1811603867598 } -->> { : 678.3563510786536 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 677.7238765016909 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|209||000000000000000000000000 min: { a: 395.6502767966605 } max: { a: 400.6101810646703 } dataWritten: 210306 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 395.6502767966605 } -->> { : 400.6101810646703 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 395.6502767966605 } -->> { : 400.6101810646703 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 395.6502767966605 }, max: { a: 400.6101810646703 }, from: "shard0001", splitKeys: [ { a: 398.1780778922134 } ], shardId: "test.foo-a_395.6502767966605", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0b9
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|255||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-264", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755901), what: "split", ns: "test.foo", details: { before: { min: { a: 395.6502767966605 }, max: { a: 400.6101810646703 }, lastmod: Timestamp 4000|209, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 395.6502767966605 }, max: { a: 398.1780778922134 }, lastmod: Timestamp 4000|258, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 398.1780778922134 }, max: { a: 400.6101810646703 }, lastmod: Timestamp 4000|259, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 264 version: 4|259||4fd97a3b0d2fef4d6a507be2 based on: 4|257||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|209||000000000000000000000000 min: { a: 395.6502767966605 } max: { a: 400.6101810646703 } on: { a: 398.1780778922134 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|259, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|257, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|75||000000000000000000000000 min: { a: 437.040103636678 } max: { a: 441.0435238853461 } dataWritten: 210716 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 437.040103636678 } -->> { : 441.0435238853461 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 439.5886910285358 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|26||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 558.0115575910545 } dataWritten: 210260 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 552.1925267328988 } -->> { : 558.0115575910545 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 552.1925267328988 } -->> { : 558.0115575910545 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 552.1925267328988 }, max: { a: 558.0115575910545 }, from: "shard0001", splitKeys: [ { a: 554.5352736346487 } ], shardId: "test.foo-a_552.1925267328988", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0ba
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|259||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-265", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755942), what: "split", ns: "test.foo", details: { before: { min: { a: 552.1925267328988 }, max: { a: 558.0115575910545 }, lastmod: Timestamp 2000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 552.1925267328988 }, max: { a: 554.5352736346487 }, lastmod: Timestamp 4000|260, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 554.5352736346487 }, max: { a: 558.0115575910545 }, lastmod: Timestamp 4000|261, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 265 version: 4|261||4fd97a3b0d2fef4d6a507be2 based on: 4|259||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|26||000000000000000000000000 min: { a: 552.1925267328988 } max: { a: 558.0115575910545 } on: { a: 554.5352736346487 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|261, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|64||000000000000000000000000 min: { a: 12.55217658236718 } max: { a: 16.11151483141404 } dataWritten: 210507 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 12.55217658236718 } -->> { : 16.11151483141404 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 14.85411328985942 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|257, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|62||000000000000000000000000 min: { a: 327.5292321238884 } max: { a: 331.4018789379612 } dataWritten: 210302 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 327.5292321238884 } -->> { : 331.4018789379612 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 329.8610365039933 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|52||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 521.3538677091974 } dataWritten: 210602 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 515.6449770586091 } -->> { : 521.3538677091974 }
m30001| Thu Jun 14 01:45:55 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 515.6449770586091 } -->> { : 521.3538677091974 }
m30001| Thu Jun 14 01:45:55 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 515.6449770586091 }, max: { a: 521.3538677091974 }, from: "shard0001", splitKeys: [ { a: 518.2463999492195 } ], shardId: "test.foo-a_515.6449770586091", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:55 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9332a28802daeee0bb
m30001| Thu Jun 14 01:45:55 [conn5] splitChunk accepted at version 4|261||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:55 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:55-266", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652755962), what: "split", ns: "test.foo", details: { before: { min: { a: 515.6449770586091 }, max: { a: 521.3538677091974 }, lastmod: Timestamp 2000|52, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 515.6449770586091 }, max: { a: 518.2463999492195 }, lastmod: Timestamp 4000|262, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 518.2463999492195 }, max: { a: 521.3538677091974 }, lastmod: Timestamp 4000|263, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:55 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:55 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 266 version: 4|263||4fd97a3b0d2fef4d6a507be2 based on: 4|261||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:55 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 2|52||000000000000000000000000 min: { a: 515.6449770586091 } max: { a: 521.3538677091974 } on: { a: 518.2463999492195 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|263, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|61||000000000000000000000000 min: { a: 636.2085863336085 } max: { a: 640.7093733209429 } dataWritten: 210625 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 636.2085863336085 } -->> { : 640.7093733209429 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 638.7084164545204 }
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|257, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:55 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|135||000000000000000000000000 min: { a: 994.7222740534528 } max: { a: 998.3975234740553 } dataWritten: 210178 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 994.7222740534528 } -->> { : 998.3975234740553 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 997.3179459511309 }
m30999| Thu Jun 14 01:45:55 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|74||000000000000000000000000 min: { a: 433.3806610330477 } max: { a: 437.040103636678 } dataWritten: 210316 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:55 [conn5] request split points lookup for chunk test.foo { : 433.3806610330477 } -->> { : 437.040103636678 }
m30999| Thu Jun 14 01:45:55 [conn] chunk not full enough to trigger auto-split { a: 435.811351208542 }
m30001| Thu Jun 14 01:45:56 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:51624144 295ms
m30001| Thu Jun 14 01:45:56 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:51969869 343ms
m30999| Thu Jun 14 01:45:56 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|138||000000000000000000000000 min: { a: 657.3538695372831 } max: { a: 660.6896106858891 } dataWritten: 210609 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:56 [conn5] request split points lookup for chunk test.foo { : 657.3538695372831 } -->> { : 660.6896106858891 }
m30999| Thu Jun 14 01:45:56 [conn] chunk not full enough to trigger auto-split { a: 659.7372111813143 }
m30999| Thu Jun 14 01:45:56 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|31||000000000000000000000000 min: { a: 258.6206493525194 } max: { a: 264.0825842924789 } dataWritten: 209872 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:56 [conn5] request split points lookup for chunk test.foo { : 258.6206493525194 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:45:56 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 258.6206493525194 } -->> { : 264.0825842924789 }
m30001| Thu Jun 14 01:45:56 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 258.6206493525194 }, max: { a: 264.0825842924789 }, from: "shard0001", splitKeys: [ { a: 261.2663901230094 } ], shardId: "test.foo-a_258.6206493525194", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:56 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:56 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9432a28802daeee0bc
m30001| Thu Jun 14 01:45:56 [conn5] splitChunk accepted at version 4|263||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:56 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:56-267", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652756647), what: "split", ns: "test.foo", details: { before: { min: { a: 258.6206493525194 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 4000|31, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 258.6206493525194 }, max: { a: 261.2663901230094 }, lastmod: Timestamp 4000|264, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 261.2663901230094 }, max: { a: 264.0825842924789 }, lastmod: Timestamp 4000|265, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:56 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:56 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 267 version: 4|265||4fd97a3b0d2fef4d6a507be2 based on: 4|263||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:56 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|31||000000000000000000000000 min: { a: 258.6206493525194 } max: { a: 264.0825842924789 } on: { a: 261.2663901230094 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:56 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|265, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:56 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:56 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:52114305 104ms
m30999| Thu Jun 14 01:45:56 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|147||000000000000000000000000 min: { a: 914.1361338478089 } max: { a: 918.4259760765641 } dataWritten: 210669 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:56 [conn5] request split points lookup for chunk test.foo { : 914.1361338478089 } -->> { : 918.4259760765641 }
m30999| Thu Jun 14 01:45:56 [conn] chunk not full enough to trigger auto-split { a: 916.5605677848788 }
m30999| Thu Jun 14 01:45:56 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|257, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:56 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:45:57 [conn3] insert test.foo keyUpdates:0 locks(micros) W:5508 r:7859660 w:52363816 247ms
m30999| Thu Jun 14 01:45:57 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|51||000000000000000000000000 min: { a: 163.3701742796004 } max: { a: 167.6382092456179 } dataWritten: 210042 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:57 [conn5] request split points lookup for chunk test.foo { : 163.3701742796004 } -->> { : 167.6382092456179 }
m30999| Thu Jun 14 01:45:57 [conn] chunk not full enough to trigger auto-split { a: 165.8158078202064 }
m30999| Thu Jun 14 01:45:57 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|52||000000000000000000000000 min: { a: 648.6747268265868 } max: { a: 652.9401841699823 } dataWritten: 210769 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:57 [conn5] request split points lookup for chunk test.foo { : 648.6747268265868 } -->> { : 652.9401841699823 }
m30999| Thu Jun 14 01:45:57 [conn] chunk not full enough to trigger auto-split { a: 651.3464677448329 }
m30999| Thu Jun 14 01:45:57 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|211||000000000000000000000000 min: { a: 242.6421093833427 } max: { a: 248.3080159156712 } dataWritten: 210746 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:57 [conn5] request split points lookup for chunk test.foo { : 242.6421093833427 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:45:57 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 242.6421093833427 } -->> { : 248.3080159156712 }
m30001| Thu Jun 14 01:45:57 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 242.6421093833427 }, max: { a: 248.3080159156712 }, from: "shard0001", splitKeys: [ { a: 245.1924455307789 } ], shardId: "test.foo-a_242.6421093833427", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:57 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:57 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9532a28802daeee0bd
m30001| Thu Jun 14 01:45:57 [conn5] splitChunk accepted at version 4|265||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:57 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:57-268", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652757104), what: "split", ns: "test.foo", details: { before: { min: { a: 242.6421093833427 }, max: { a: 248.3080159156712 }, lastmod: Timestamp 4000|211, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 242.6421093833427 }, max: { a: 245.1924455307789 }, lastmod: Timestamp 4000|266, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') }, right: { min: { a: 245.1924455307789 }, max: { a: 248.3080159156712 }, lastmod: Timestamp 4000|267, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2') } } }
m30001| Thu Jun 14 01:45:57 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30999| Thu Jun 14 01:45:57 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 268 version: 4|267||4fd97a3b0d2fef4d6a507be2 based on: 4|265||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:57 [conn] autosplitted test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|211||000000000000000000000000 min: { a: 242.6421093833427 } max: { a: 248.3080159156712 } on: { a: 245.1924455307789 } (splitThreshold 1048576)
m30999| Thu Jun 14 01:45:57 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|267, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:45:57 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:57 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|120||000000000000000000000000 min: { a: 83.77384564239721 } max: { a: 87.41840730135154 } dataWritten: 209822 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:57 [conn5] request split points lookup for chunk test.foo { : 83.77384564239721 } -->> { : 87.41840730135154 }
m30999| Thu Jun 14 01:45:57 [conn] chunk not full enough to trigger auto-split { a: 86.30750827144318 }
m30999| Thu Jun 14 01:45:57 [conn] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 4|194||000000000000000000000000 min: { a: 970.39026226179 } max: { a: 973.4895868865218 } dataWritten: 210333 splitThreshold: 1048576
m30001| Thu Jun 14 01:45:57 [conn5] request split points lookup for chunk test.foo { : 970.39026226179 } -->> { : 973.4895868865218 }
m30999| Thu Jun 14 01:45:57 [conn] chunk not full enough to trigger auto-split { a: 972.998624758458 }
m30999| Thu Jun 14 01:45:57 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|257, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:45:57 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:45:58 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:45:58 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652667:1804289383', sleeping for 30000ms
m30999| Thu Jun 14 01:45:58 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:45:58 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:45:58 [Balancer] connected connection!
m30999| Thu Jun 14 01:45:58 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:45:58 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:45:58 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a960d2fef4d6a507bea" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a780d2fef4d6a507be9" } }
m30999| Thu Jun 14 01:45:58 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a960d2fef4d6a507bea
m30999| Thu Jun 14 01:45:58 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:45:58 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:45:58 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:45:58 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:45:58 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:45:58 [Balancer] shard0000
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 4000|256, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 2.742599007396374 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_2.742599007396374", lastmod: Timestamp 4000|257, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 2.742599007396374 }, max: { a: 5.826356493812579 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 4000|228, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 8.457858050974988 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_8.457858050974988", lastmod: Timestamp 4000|229, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 8.457858050974988 }, max: { a: 12.55217658236718 }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] shard0001
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 4000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 4000|72, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_20.02617482801994", lastmod: Timestamp 4000|214, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 20.02617482801994 }, max: { a: 22.72135361925398 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_22.72135361925398", lastmod: Timestamp 4000|215, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 22.72135361925398 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 4000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 34.95140019143683 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_34.95140019143683", lastmod: Timestamp 4000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 34.95140019143683 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 4000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 43.98990958864879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_43.98990958864879", lastmod: Timestamp 4000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 43.98990958864879 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 4000|102, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 51.90923851177054 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_51.90923851177054", lastmod: Timestamp 4000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 51.90923851177054 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 4000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 61.76919454003927 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_61.76919454003927", lastmod: Timestamp 4000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 61.76919454003927 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 4000|76, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 70.06331619195872 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_70.06331619195872", lastmod: Timestamp 4000|77, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 70.06331619195872 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 4000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 78.73686651492073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_78.73686651492073", lastmod: Timestamp 4000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 78.73686651492073 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 4000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 87.41840730135154 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_87.41840730135154", lastmod: Timestamp 4000|196, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 87.41840730135154 }, max: { a: 89.89791872458619 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_89.89791872458619", lastmod: Timestamp 4000|197, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 89.89791872458619 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 4000|220, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 95.6069228239147 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_95.6069228239147", lastmod: Timestamp 4000|244, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 95.6069228239147 }, max: { a: 98.16826107499755 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_98.16826107499755", lastmod: Timestamp 4000|245, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 98.16826107499755 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 4000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 106.0311910436654 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_106.0311910436654", lastmod: Timestamp 4000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 106.0311910436654 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 4000|78, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 114.9662096443472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_114.9662096443472", lastmod: Timestamp 4000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 114.9662096443472 }, max: { a: 118.3157678917793 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_118.3157678917793", lastmod: Timestamp 4000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 118.3157678917793 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 127.4590140914801 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_127.4590140914801", lastmod: Timestamp 4000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 127.4590140914801 }, max: { a: 131.8115136015859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_131.8115136015859", lastmod: Timestamp 4000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 131.8115136015859 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 141.1884883168546 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_141.1884883168546", lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 141.1884883168546 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 4000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 150.1357777689222 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_150.1357777689222", lastmod: Timestamp 4000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 150.1357777689222 }, max: { a: 153.684305048146 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_153.684305048146", lastmod: Timestamp 4000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 153.684305048146 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 4000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 163.3701742796004 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_163.3701742796004", lastmod: Timestamp 4000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 163.3701742796004 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 4000|226, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 170.2748683082939 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_170.2748683082939", lastmod: Timestamp 4000|227, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 170.2748683082939 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 4000|224, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 178.4802269484291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_178.4802269484291", lastmod: Timestamp 4000|225, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 178.4802269484291 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 4000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 184.9464054233513 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_184.9464054233513", lastmod: Timestamp 4000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 184.9464054233513 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 4000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 191.5307698720086 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_191.5307698720086", lastmod: Timestamp 4000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 191.5307698720086 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 4000|94, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 198.5601903660538 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_198.5601903660538", lastmod: Timestamp 4000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 198.5601903660538 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 4000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 207.0875453859469 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_207.0875453859469", lastmod: Timestamp 4000|185, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 207.0875453859469 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 4000|204, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 212.8104857756458 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_212.8104857756458", lastmod: Timestamp 4000|205, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 212.8104857756458 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 4000|100, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 220.5716558736682 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_220.5716558736682", lastmod: Timestamp 4000|248, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 220.5716558736682 }, max: { a: 222.9840106087572 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_222.9840106087572", lastmod: Timestamp 4000|249, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 222.9840106087572 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 4000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 228.7035403403385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_228.7035403403385", lastmod: Timestamp 4000|234, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 228.7035403403385 }, max: { a: 231.249558963907 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_231.249558963907", lastmod: Timestamp 4000|235, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 231.249558963907 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 4000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 236.7690508533622 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_236.7690508533622", lastmod: Timestamp 4000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 236.7690508533622 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 4000|210, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 242.6421093833427 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_242.6421093833427", lastmod: Timestamp 4000|266, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 242.6421093833427 }, max: { a: 245.1924455307789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_245.1924455307789", lastmod: Timestamp 4000|267, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 245.1924455307789 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 4000|222, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 250.7993295308498 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_250.7993295308498", lastmod: Timestamp 4000|223, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 250.7993295308498 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 4000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 258.6206493525194 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_258.6206493525194", lastmod: Timestamp 4000|264, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 258.6206493525194 }, max: { a: 261.2663901230094 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_261.2663901230094", lastmod: Timestamp 4000|265, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 261.2663901230094 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 4000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 280.6827052136106 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_280.6827052136106", lastmod: Timestamp 4000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 280.6827052136106 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 289.7137301985317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_289.7137301985317", lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 289.7137301985317 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 4000|252, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 302.7151830329477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_302.7151830329477", lastmod: Timestamp 4000|253, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 302.7151830329477 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 4000|186, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 312.3135459595852 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_312.3135459595852", lastmod: Timestamp 4000|187, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 312.3135459595852 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 4000|250, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 323.8729876956295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_323.8729876956295", lastmod: Timestamp 4000|251, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 323.8729876956295 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 4000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 331.4018789379612 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_331.4018789379612", lastmod: Timestamp 4000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 331.4018789379612 }, max: { a: 334.3168575448847 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_334.3168575448847", lastmod: Timestamp 4000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 334.3168575448847 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 4000|212, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 340.4008653065953 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_340.4008653065953", lastmod: Timestamp 4000|213, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 340.4008653065953 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 4000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 349.1094580993942 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_349.1094580993942", lastmod: Timestamp 4000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 349.1094580993942 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 4000|240, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 355.8076820303829 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_355.8076820303829", lastmod: Timestamp 4000|241, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 355.8076820303829 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 4000|232, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 360.7881657776425 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_360.7881657776425", lastmod: Timestamp 4000|233, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 360.7881657776425 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 4000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 373.3849373054079 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_373.3849373054079", lastmod: Timestamp 4000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 373.3849373054079 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 4000|242, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 380.9471963970786 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_380.9471963970786", lastmod: Timestamp 4000|243, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 380.9471963970786 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 4000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 387.7659705009871 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_387.7659705009871", lastmod: Timestamp 4000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 387.7659705009871 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 4000|208, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 395.6502767966605 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_395.6502767966605", lastmod: Timestamp 4000|258, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 395.6502767966605 }, max: { a: 398.1780778922134 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_398.1780778922134", lastmod: Timestamp 4000|259, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 398.1780778922134 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 4000|104, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 404.1458625239371 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_404.1458625239371", lastmod: Timestamp 4000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 404.1458625239371 }, max: { a: 407.0796926580036 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_407.0796926580036", lastmod: Timestamp 4000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 407.0796926580036 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 4000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 413.7945438036655 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_413.7945438036655", lastmod: Timestamp 4000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 413.7945438036655 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 4000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 430.2130944220548 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_430.2130944220548", lastmod: Timestamp 4000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 430.2130944220548 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 4000|74, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 437.040103636678 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_437.040103636678", lastmod: Timestamp 4000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 437.040103636678 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 4000|236, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 443.7079718299926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_443.7079718299926", lastmod: Timestamp 4000|237, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 443.7079718299926 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 4000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 451.8120411874291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_451.8120411874291", lastmod: Timestamp 4000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 451.8120411874291 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 4000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 459.7315330482733 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_459.7315330482733", lastmod: Timestamp 4000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 459.7315330482733 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 4000|218, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 466.1607312365173 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_466.1607312365173", lastmod: Timestamp 4000|219, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 466.1607312365173 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 4000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 477.2807394020033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_477.2807394020033", lastmod: Timestamp 4000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 477.2807394020033 }, max: { a: 480.2747403619077 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_480.2747403619077", lastmod: Timestamp 4000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 480.2747403619077 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 4000|110, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 493.6797279933101 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_493.6797279933101", lastmod: Timestamp 4000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 493.6797279933101 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 4000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 501.5945768521381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_501.5945768521381", lastmod: Timestamp 4000|246, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 501.5945768521381 }, max: { a: 503.8814286501491 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_503.8814286501491", lastmod: Timestamp 4000|247, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 503.8814286501491 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 4000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 510.639225969218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_510.639225969218", lastmod: Timestamp 4000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 510.639225969218 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 4000|262, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 518.2463999492195 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_518.2463999492195", lastmod: Timestamp 4000|263, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 518.2463999492195 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 531.7597013546634 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_531.7597013546634", lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 531.7597013546634 }, max: { a: 536.0462960134931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_536.0462960134931", lastmod: Timestamp 4000|188, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 536.0462960134931 }, max: { a: 539.1281234038355 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_539.1281234038355", lastmod: Timestamp 4000|189, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 539.1281234038355 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 4000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 545.8257932837977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_545.8257932837977", lastmod: Timestamp 4000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 545.8257932837977 }, max: { a: 548.9817180888258 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_548.9817180888258", lastmod: Timestamp 4000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 548.9817180888258 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 4000|260, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 554.5352736346487 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_554.5352736346487", lastmod: Timestamp 4000|261, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 554.5352736346487 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 4000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 560.838593433049 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_560.838593433049", lastmod: Timestamp 4000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 560.838593433049 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 4000|92, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 567.3645636091692 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_567.3645636091692", lastmod: Timestamp 4000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 567.3645636091692 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 4000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 575.2102660145707 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_575.2102660145707", lastmod: Timestamp 4000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 575.2102660145707 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 4000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 584.4225320226172 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_584.4225320226172", lastmod: Timestamp 4000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 584.4225320226172 }, max: { a: 587.1685851091131 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_587.1685851091131", lastmod: Timestamp 4000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 587.1685851091131 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 4000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 594.3878051880898 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_594.3878051880898", lastmod: Timestamp 4000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 594.3878051880898 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 603.53104016638 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_603.53104016638", lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 603.53104016638 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 4000|238, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 617.9571577143996 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_617.9571577143996", lastmod: Timestamp 4000|239, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 617.9571577143996 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 628.1995001147562 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_628.1995001147562", lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 628.1995001147562 }, max: { a: 632.4786347534061 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_632.4786347534061", lastmod: Timestamp 4000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 632.4786347534061 }, max: { a: 636.2085863336085 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_636.2085863336085", lastmod: Timestamp 4000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 636.2085863336085 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 4000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 644.4017960752651 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_644.4017960752651", lastmod: Timestamp 4000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 644.4017960752651 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 4000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 652.9401841699823 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_652.9401841699823", lastmod: Timestamp 4000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 652.9401841699823 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 4000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 660.6896106858891 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_660.6896106858891", lastmod: Timestamp 4000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 660.6896106858891 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 4000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 668.6362621623331 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_668.6362621623331", lastmod: Timestamp 4000|88, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 668.6362621623331 }, max: { a: 672.2870891659105 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_672.2870891659105", lastmod: Timestamp 4000|206, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 672.2870891659105 }, max: { a: 675.1811603867598 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_675.1811603867598", lastmod: Timestamp 4000|207, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 675.1811603867598 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 4000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 681.3003030169281 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_681.3003030169281", lastmod: Timestamp 4000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 681.3003030169281 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 4000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 689.5707127489441 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_689.5707127489441", lastmod: Timestamp 4000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 689.5707127489441 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 4000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 698.4329238257609 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_698.4329238257609", lastmod: Timestamp 4000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 698.4329238257609 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 4000|198, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 717.0859810000978 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_717.0859810000978", lastmod: Timestamp 4000|199, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 717.0859810000978 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 4000|82, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 725.5771489434317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_725.5771489434317", lastmod: Timestamp 4000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 725.5771489434317 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 4000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 732.9348251743502 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_732.9348251743502", lastmod: Timestamp 4000|202, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 732.9348251743502 }, max: { a: 735.4457009121708 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_735.4457009121708", lastmod: Timestamp 4000|203, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 735.4457009121708 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 4000|192, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 741.3245176669844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_741.3245176669844", lastmod: Timestamp 4000|193, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 741.3245176669844 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 4000|80, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 748.6872188241756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_748.6872188241756", lastmod: Timestamp 4000|81, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 748.6872188241756 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 4000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 756.637103632288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_756.637103632288", lastmod: Timestamp 4000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 756.637103632288 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 4000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 765.2211241548246 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_765.2211241548246", lastmod: Timestamp 4000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 765.2211241548246 }, max: { a: 768.6399184840259 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_768.6399184840259", lastmod: Timestamp 4000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 768.6399184840259 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 777.6503149863191 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_777.6503149863191", lastmod: Timestamp 4000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 777.6503149863191 }, max: { a: 780.6933276463033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_780.6933276463033", lastmod: Timestamp 4000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 780.6933276463033 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 4000|200, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 787.2181223195419 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_787.2181223195419", lastmod: Timestamp 4000|201, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 787.2181223195419 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 4000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 793.7120312511385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_793.7120312511385", lastmod: Timestamp 4000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 793.7120312511385 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 802.4966878498034 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_802.4966878498034", lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 802.4966878498034 }, max: { a: 807.4105833931693 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_807.4105833931693", lastmod: Timestamp 4000|118, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 807.4105833931693 }, max: { a: 810.8918013325706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_810.8918013325706", lastmod: Timestamp 4000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 810.8918013325706 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 4000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 824.2680954051706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_824.2680954051706", lastmod: Timestamp 4000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 824.2680954051706 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 4000|216, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 836.3608305125814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_836.3608305125814", lastmod: Timestamp 4000|217, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 836.3608305125814 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 4000|122, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 843.8858257205128 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_843.8858257205128", lastmod: Timestamp 4000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 843.8858257205128 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 4000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 851.468355264985 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_851.468355264985", lastmod: Timestamp 4000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 851.468355264985 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 4000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 864.7746195980726 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_864.7746195980726", lastmod: Timestamp 4000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 864.7746195980726 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 4000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 877.8438233640235 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_877.8438233640235", lastmod: Timestamp 4000|87, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 877.8438233640235 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 4000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 886.5207670748756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_886.5207670748756", lastmod: Timestamp 4000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 886.5207670748756 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 4000|190, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 894.8106130543974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_894.8106130543974", lastmod: Timestamp 4000|191, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 894.8106130543974 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 4000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 901.6037051063506 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_901.6037051063506", lastmod: Timestamp 4000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 901.6037051063506 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 4000|254, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 907.8304631917699 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_907.8304631917699", lastmod: Timestamp 4000|255, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 907.8304631917699 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 4000|146, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 914.1361338478089 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_914.1361338478089", lastmod: Timestamp 4000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 914.1361338478089 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 4000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 921.5853246168082 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_921.5853246168082", lastmod: Timestamp 4000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 921.5853246168082 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 4000|152, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 951.1531632632295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_951.1531632632295", lastmod: Timestamp 4000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 951.1531632632295 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 960.5824651536831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_960.5824651536831", lastmod: Timestamp 4000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 960.5824651536831 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 4000|194, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 973.4895868865218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_973.4895868865218", lastmod: Timestamp 4000|195, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 973.4895868865218 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 4000|96, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 980.667776515926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_980.667776515926", lastmod: Timestamp 4000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 980.667776515926 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 4000|230, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 988.3510075746844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_988.3510075746844", lastmod: Timestamp 4000|231, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 988.3510075746844 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 4000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 994.7222740534528 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_994.7222740534528", lastmod: Timestamp 4000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 994.7222740534528 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] ----
m30999| Thu Jun 14 01:45:58 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:45:58 [Balancer] donor : 256 chunks on shard0001
m30999| Thu Jun 14 01:45:58 [Balancer] receiver : 5 chunks on shard0000
m30999| Thu Jun 14 01:45:58 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 4000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:45:58 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:45:58 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:45:58 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:45:58 [Balancer] shard0000
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff228c8')", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, max: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22c95')", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, max: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff2305f')", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, max: { _id: ObjectId('4fd97a3d05a35677eff23246') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2342c')", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, max: { _id: ObjectId('4fd97a3d05a35677eff23611') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff237f5')", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, max: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23bc4')", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, max: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23f8f')", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24176') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2435d')", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, max: { _id: ObjectId('4fd97a3d05a35677eff24541') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24727')", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24727') }, max: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24af4')", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, max: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24ec4')", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, max: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25295')", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25295') }, max: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25663')", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25663') }, max: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25a31')", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, max: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25e01')", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, max: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff261d0')", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, max: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff26598')", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff26598') }, max: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26964')", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26964') }, max: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26d35')", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, max: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27105')", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27105') }, max: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff274d5')", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, max: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff278a1')", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, max: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27c6f')", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2803f')", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, max: { _id: ObjectId('4fd97a3f05a35677eff28226') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2840d')", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, max: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff287d7')", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff287d7') }, max: { _id: ObjectId('4fd97a4005a35677eff289bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28ba4')", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, max: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28f71')", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28f71') }, max: { _id: ObjectId('4fd97a4005a35677eff29159') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2933f')", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2933f') }, max: { _id: ObjectId('4fd97a4005a35677eff29523') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29708')", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29708') }, max: { _id: ObjectId('4fd97a4005a35677eff298ed') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29ad4')", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, max: { _id: ObjectId('4fd97a4005a35677eff29cba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29e9f')", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, max: { _id: ObjectId('4fd97a4005a35677eff2a086') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a26b')", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, max: { _id: ObjectId('4fd97a4005a35677eff2a450') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a636')", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a636') }, max: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2aa03')", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, max: { _id: ObjectId('4fd97a4105a35677eff2abea') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2add0')", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2add0') }, max: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b1a0')", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, max: { _id: ObjectId('4fd97a4105a35677eff2b387') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b56f')", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, max: { _id: ObjectId('4fd97a4105a35677eff2b757') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b93b')", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, max: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bd07')", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, max: { _id: ObjectId('4fd97a4205a35677eff2beee') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c0d4')", lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, max: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c4a2')", lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, max: { _id: ObjectId('4fd97a4205a35677eff2c687') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c86f')", lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, max: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2cc39')", lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, max: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d008')", lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d008') }, max: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d3d5')", lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, max: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d7a1')", lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, max: { _id: ObjectId('4fd97a4305a35677eff2d986') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2db6f')", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, max: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2df3e')", lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, max: { _id: ObjectId('4fd97a4305a35677eff2e127') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e30d')", lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, max: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e6d8')", lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, max: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2eaa5')", lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, max: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ee6d')", lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, max: { _id: ObjectId('4fd97a4305a35677eff2f052') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f239')", lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f239') }, max: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f603')", lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f603') }, max: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f9cd')", lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, max: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fd9a')", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, max: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3016a')", lastmod: Timestamp 1000|115, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3016a') }, max: { _id: ObjectId('4fd97a4405a35677eff30351') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30537')", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30537') }, max: { _id: ObjectId('4fd97a4405a35677eff30721') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30907')", lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30907') }, max: { _id: ObjectId('4fd97a4405a35677eff30aef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30cd5')", lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, max: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff310a7')", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff310a7') }, max: { _id: ObjectId('4fd97a4405a35677eff3128e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31473')", lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31473') }, max: { _id: ObjectId('4fd97a4405a35677eff3165b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31841')", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31841') }, max: { _id: ObjectId('4fd97a4405a35677eff31a28') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31c0d')", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, max: { _id: ObjectId('4fd97a4405a35677eff31df3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31fda')", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31fda') }, max: { _id: ObjectId('4fd97a4405a35677eff321bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff323a4')", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff323a4') }, max: { _id: ObjectId('4fd97a4405a35677eff3258c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32774')", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32774') }, max: { _id: ObjectId('4fd97a4505a35677eff32958') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32b3d')", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, max: { _id: ObjectId('4fd97a4505a35677eff32d23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32f0c')", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, max: { _id: ObjectId('4fd97a4505a35677eff330f5') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff332d9')", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff332d9') }, max: { _id: ObjectId('4fd97a4505a35677eff334c2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff336ab')", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff336ab') }, max: { _id: ObjectId('4fd97a4505a35677eff33891') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33a77')", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33a77') }, max: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33e41')", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33e41') }, max: { _id: ObjectId('4fd97a4605a35677eff34026') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff3420d')", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff3420d') }, max: { _id: ObjectId('4fd97a4605a35677eff343f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff345d9')", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff345d9') }, max: { _id: ObjectId('4fd97a4605a35677eff347c1') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff349a9')", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff349a9') }, max: { _id: ObjectId('4fd97a4705a35677eff34b90') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34d79')", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34d79') }, max: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35147')", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35147') }, max: { _id: ObjectId('4fd97a4705a35677eff3532c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35511')", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35511') }, max: { _id: ObjectId('4fd97a4705a35677eff356fa') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff358e1')", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff358e1') }, max: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35cab')", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35cab') }, max: { _id: ObjectId('4fd97a4705a35677eff35e91') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3607a')", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3607a') }, max: { _id: ObjectId('4fd97a4805a35677eff3625f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36447')", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36447') }, max: { _id: ObjectId('4fd97a4805a35677eff3662c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36814')", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36814') }, max: { _id: ObjectId('4fd97a4805a35677eff369f9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36be0')", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36be0') }, max: { _id: ObjectId('4fd97a4805a35677eff36dca') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36faf')", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36faf') }, max: { _id: ObjectId('4fd97a4805a35677eff37195') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3737a')", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3737a') }, max: { _id: ObjectId('4fd97a4805a35677eff37560') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37747')", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37747') }, max: { _id: ObjectId('4fd97a4905a35677eff3792f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37b15')", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37b15') }, max: { _id: ObjectId('4fd97a4905a35677eff37cff') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37ee8')", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, max: { _id: ObjectId('4fd97a4905a35677eff380d0') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff382b9')", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff382b9') }, max: { _id: ObjectId('4fd97a4905a35677eff3849e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38684')", lastmod: Timestamp 1000|185, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38684') }, max: { _id: ObjectId('4fd97a4905a35677eff38869') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38a4e')", lastmod: Timestamp 1000|187, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, max: { _id: ObjectId('4fd97a4905a35677eff38c32') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38e1d')", lastmod: Timestamp 1000|189, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, max: { _id: ObjectId('4fd97a4905a35677eff39001') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff391e8')", lastmod: Timestamp 1000|191, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff391e8') }, max: { _id: ObjectId('4fd97a4905a35677eff393cf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff395b6')", lastmod: Timestamp 1000|193, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff395b6') }, max: { _id: ObjectId('4fd97a4905a35677eff3979b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39985')", lastmod: Timestamp 1000|195, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39985') }, max: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39d51')", lastmod: Timestamp 1000|197, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, max: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a121')", lastmod: Timestamp 1000|199, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a4ed')", lastmod: Timestamp 1000|201, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a8b9')", lastmod: Timestamp 1000|203, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, max: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3ac84')", lastmod: Timestamp 1000|205, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:45:58 [Balancer] shard0001
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: MinKey }, max: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22aac')", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, max: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22e7b')", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, max: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23246')", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23246') }, max: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23611')", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23611') }, max: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff239dc')", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, max: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23da9')", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, max: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24176')", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24176') }, max: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24541')", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24541') }, max: { _id: ObjectId('4fd97a3d05a35677eff24727') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2490f')", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24cde')", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, max: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff250ad')", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, max: { _id: ObjectId('4fd97a3e05a35677eff25295') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2547d')", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, max: { _id: ObjectId('4fd97a3e05a35677eff25663') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2584a')", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, max: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25c16')", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, max: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25fe8')", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, max: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff263b4')", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, max: { _id: ObjectId('4fd97a3e05a35677eff26598') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2677e')", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, max: { _id: ObjectId('4fd97a3f05a35677eff26964') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26b4c')", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, max: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26f1f')", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27105') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff272ec')", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, max: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff276ba')", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, max: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27a87')", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, max: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27e57')", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, max: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff28226')", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff28226') }, max: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff285f3')", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, max: { _id: ObjectId('4fd97a4005a35677eff287d7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff289bf')", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff289bf') }, max: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28d8b')", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, max: { _id: ObjectId('4fd97a4005a35677eff28f71') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29159')", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29159') }, max: { _id: ObjectId('4fd97a4005a35677eff2933f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29523')", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29523') }, max: { _id: ObjectId('4fd97a4005a35677eff29708') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff298ed')", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff298ed') }, max: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29cba')", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29cba') }, max: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a086')", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a086') }, max: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a450')", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a450') }, max: { _id: ObjectId('4fd97a4105a35677eff2a636') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a81d')", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, max: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2abea')", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2abea') }, max: { _id: ObjectId('4fd97a4105a35677eff2add0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2afb8')", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, max: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b387')", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b387') }, max: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b757')", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b757') }, max: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bb23')", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, max: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2beee')", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2beee') }, max: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c2bb')", lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, max: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c687')", lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c687') }, max: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ca54')", lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, max: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ce20')", lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, max: { _id: ObjectId('4fd97a4205a35677eff2d008') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d1ef')", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, max: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d5bc')", lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, max: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2d986')", lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2d986') }, max: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2dd54')", lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, max: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e127')", lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e127') }, max: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e4f2')", lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, max: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e8bf')", lastmod: Timestamp 1000|102, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, max: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ec89')", lastmod: Timestamp 1000|104, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, max: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f052')", lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f052') }, max: { _id: ObjectId('4fd97a4305a35677eff2f239') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f41f')", lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, max: { _id: ObjectId('4fd97a4305a35677eff2f603') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f7e7')", lastmod: Timestamp 1000|110, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, max: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fbb4')", lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, max: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ff82')", lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, max: { _id: ObjectId('4fd97a4405a35677eff3016a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30351')", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30351') }, max: { _id: ObjectId('4fd97a4405a35677eff30537') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30721')", lastmod: Timestamp 1000|118, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30721') }, max: { _id: ObjectId('4fd97a4405a35677eff30907') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30aef')", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30aef') }, max: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30ebc')", lastmod: Timestamp 1000|122, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, max: { _id: ObjectId('4fd97a4405a35677eff310a7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3128e')", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3128e') }, max: { _id: ObjectId('4fd97a4405a35677eff31473') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3165b')", lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3165b') }, max: { _id: ObjectId('4fd97a4405a35677eff31841') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31a28')", lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31a28') }, max: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31df3')", lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31df3') }, max: { _id: ObjectId('4fd97a4405a35677eff31fda') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff321bf')", lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff321bf') }, max: { _id: ObjectId('4fd97a4405a35677eff323a4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3258c')", lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3258c') }, max: { _id: ObjectId('4fd97a4505a35677eff32774') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32958')", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32958') }, max: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32d23')", lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32d23') }, max: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff330f5')", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff330f5') }, max: { _id: ObjectId('4fd97a4505a35677eff332d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff334c2')", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff334c2') }, max: { _id: ObjectId('4fd97a4505a35677eff336ab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff33891')", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff33891') }, max: { _id: ObjectId('4fd97a4605a35677eff33a77') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33c5c')", lastmod: Timestamp 1000|146, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, max: { _id: ObjectId('4fd97a4605a35677eff33e41') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff34026')", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff34026') }, max: { _id: ObjectId('4fd97a4605a35677eff3420d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff343f3')", lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff343f3') }, max: { _id: ObjectId('4fd97a4605a35677eff345d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff347c1')", lastmod: Timestamp 1000|152, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff347c1') }, max: { _id: ObjectId('4fd97a4605a35677eff349a9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34b90')", lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34b90') }, max: { _id: ObjectId('4fd97a4705a35677eff34d79') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34f5f')", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, max: { _id: ObjectId('4fd97a4705a35677eff35147') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff3532c')", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff3532c') }, max: { _id: ObjectId('4fd97a4705a35677eff35511') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff356fa')", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff356fa') }, max: { _id: ObjectId('4fd97a4705a35677eff358e1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35ac6')", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, max: { _id: ObjectId('4fd97a4705a35677eff35cab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35e91')", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35e91') }, max: { _id: ObjectId('4fd97a4805a35677eff3607a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3625f')", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3625f') }, max: { _id: ObjectId('4fd97a4805a35677eff36447') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3662c')", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3662c') }, max: { _id: ObjectId('4fd97a4805a35677eff36814') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff369f9')", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff369f9') }, max: { _id: ObjectId('4fd97a4805a35677eff36be0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36dca')", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36dca') }, max: { _id: ObjectId('4fd97a4805a35677eff36faf') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37195')", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37195') }, max: { _id: ObjectId('4fd97a4805a35677eff3737a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37560')", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37560') }, max: { _id: ObjectId('4fd97a4905a35677eff37747') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3792f')", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3792f') }, max: { _id: ObjectId('4fd97a4905a35677eff37b15') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37cff')", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37cff') }, max: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff380d0')", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff380d0') }, max: { _id: ObjectId('4fd97a4905a35677eff382b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3849e')", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3849e') }, max: { _id: ObjectId('4fd97a4905a35677eff38684') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38869')", lastmod: Timestamp 1000|186, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38869') }, max: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38c32')", lastmod: Timestamp 1000|188, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38c32') }, max: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff39001')", lastmod: Timestamp 1000|190, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff39001') }, max: { _id: ObjectId('4fd97a4905a35677eff391e8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff393cf')", lastmod: Timestamp 1000|192, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff393cf') }, max: { _id: ObjectId('4fd97a4905a35677eff395b6') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3979b')", lastmod: Timestamp 1000|194, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3979b') }, max: { _id: ObjectId('4fd97a4a05a35677eff39985') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39b6a')", lastmod: Timestamp 1000|196, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, max: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39f36')", lastmod: Timestamp 1000|198, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a306')", lastmod: Timestamp 1000|200, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a6d3')", lastmod: Timestamp 1000|202, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3aa9d')", lastmod: Timestamp 1000|204, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, max: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, shard: "shard0001" }
m30999| Thu Jun 14 01:45:58 [Balancer] ----
m30999| Thu Jun 14 01:45:58 [Balancer] collection : test.mrShardedOut
m30999| Thu Jun 14 01:45:58 [Balancer] donor : 103 chunks on shard0000
m30999| Thu Jun 14 01:45:58 [Balancer] receiver : 103 chunks on shard0000
m30999| Thu Jun 14 01:45:58 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 4|64||000000000000000000000000 min: { a: 12.55217658236718 } max: { a: 16.11151483141404 }) shard0001:localhost:30001 -> shard0000:localhost:30000
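The chunk entries the balancer dumps above are ordinary documents in the config database (config.chunks), carrying the ns/min/max/shard fields exactly as printed. A minimal sketch of reproducing this listing and the per-shard totals through the mongos, assuming a pymongo client and the localhost:30999 mongos from this run:

# Sketch only: read the same chunk metadata the balancer prints, via the mongos.
from pymongo import MongoClient

mongos = MongoClient("localhost", 30999)  # mongos from this test run (assumed reachable)
for c in mongos.config.chunks.find({"ns": "test.mrShardedOut"}).sort("min", 1):
    # each document has the ns/min/max/shard fields shown in the log lines above
    print(c["shard"], c["min"], "->", c["max"])

# per-shard totals, comparable to the donor/receiver summary printed above
for shard in ("shard0000", "shard0001"):
    n = mongos.config.chunks.count_documents({"ns": "test.mrShardedOut", "shard": shard})
    print(shard, n)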
m30000| Thu Jun 14 01:45:58 [initandlisten] connection accepted from 127.0.0.1:39147 #20 (18 connections now open)
m30001| Thu Jun 14 01:45:58 [conn5] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_12.55217658236718", configdb: "localhost:30000" }
m30001| Thu Jun 14 01:45:58 [conn5] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Thu Jun 14 01:45:58 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' acquired, ts : 4fd97a9632a28802daeee0be
m30001| Thu Jun 14 01:45:58 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:58-269", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652758871), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:45:58 [conn5] moveChunk request accepted at version 4|267||4fd97a3b0d2fef4d6a507be2
m30001| Thu Jun 14 01:45:58 [conn5] moveChunk number of documents: 749
m30000| Thu Jun 14 01:45:58 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 12.55217658236718 } -> { a: 16.11151483141404 }
m30001| Thu Jun 14 01:45:59 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shardKeyPattern: { a: 1 }, state: "steady", counts: { cloned: 749, clonedBytes: 797685, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Thu Jun 14 01:45:59 [conn5] moveChunk setting version to: 5|0||4fd97a3b0d2fef4d6a507be2
m30000| Thu Jun 14 01:45:59 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 12.55217658236718 } -> { a: 16.11151483141404 }
m30000| Thu Jun 14 01:45:59 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:59-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1339652759877), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 78, step4 of 5: 0, step5 of 5: 925 } }
m30001| Thu Jun 14 01:45:59 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shardKeyPattern: { a: 1 }, state: "done", counts: { cloned: 749, clonedBytes: 797685, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Thu Jun 14 01:45:59 [conn5] moveChunk updating self version to: 5|1||4fd97a3b0d2fef4d6a507be2 through { a: 16.11151483141404 } -> { a: 20.02617482801994 } for collection 'test.foo'
m30001| Thu Jun 14 01:45:59 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:59-270", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652759882), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, from: "shard0001", to: "shard0000" } }
m30001| Thu Jun 14 01:45:59 [conn5] forking for cleaning up chunk data
m30001| Thu Jun 14 01:45:59 [conn5] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1339652668:318525290' unlocked.
m30001| Thu Jun 14 01:45:59 [conn5] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-06-14T05:45:59-271", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:48976", time: new Date(1339652759883), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 1002, step5 of 6: 6, step6 of 6: 0 } }
m30001| Thu Jun 14 01:45:59 [conn5] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_12.55217658236718", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:737289 w:180 reslen:37 1013ms
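The migration just logged was triggered by the balancer, but the same move can be requested by hand with the documented moveChunk admin command against the mongos, which turns it into the internal from/to request shard0001 received at the start of this sequence. A hedged sketch, with the chunk boundary value copied from this log and the connection details assumed:

# Sketch only: manually request the chunk move the balancer performed above.
from pymongo import MongoClient

mongos = MongoClient("localhost", 30999)
result = mongos.admin.command(
    "moveChunk", "test.foo",
    find={"a": 12.55217658236718},  # any shard-key value inside the chunk
    to="shard0000")
print(result)  # on success the command reports ok: 1, matching the balancer's result below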
m30999| Thu Jun 14 01:45:59 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Thu Jun 14 01:45:59 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 269 version: 5|1||4fd97a3b0d2fef4d6a507be2 based on: 4|267||4fd97a3b0d2fef4d6a507be2
m30999| Thu Jun 14 01:45:59 [Balancer] *** end of balancing round
m30999| Thu Jun 14 01:45:59 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30001| Thu Jun 14 01:45:59 [cleanupOldData] (start) waiting to cleanup test.foo from { a: 12.55217658236718 } -> { a: 16.11151483141404 } # cursors:1
m30001| Thu Jun 14 01:45:59 [cleanupOldData] (looping 1) waiting to cleanup test.foo from { a: 12.55217658236718 } -> { a: 16.11151483141404 } # cursors:1
m30001| Thu Jun 14 01:45:59 [cleanupOldData] cursors: 7764956908335060632
m30999| Thu Jun 14 01:46:02 [conn] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|0, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0000", shardHost: "localhost:30000" } 0x898a898
m30999| Thu Jun 14 01:46:02 [conn] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30999| Thu Jun 14 01:46:02 [conn] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|1, versionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), serverID: ObjectId('4fd97a3b0d2fef4d6a507be0'), shard: "shard0001", shardHost: "localhost:30001" } 0x898d068
m30999| Thu Jun 14 01:46:02 [conn] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ok: 1.0 }
m30001| Thu Jun 14 01:46:02 [cleanupOldData] moveChunk deleted: 749
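At this point the donor has finished deleting the 749 migrated documents and the mongos has propagated the new 5|x versions to both shards through the setShardVersion exchange above. A quick cross-check of what the log reports, again only a sketch with assumed connection details:

# Sketch only: confirm the post-migration state this log describes.
from pymongo import MongoClient

mongos = MongoClient("localhost", 30999)

# getShardVersion on a mongos reports the version it holds for the namespace;
# after this round it should match the 5|x version logged above.
print(mongos.admin.command("getShardVersion", "test.foo"))

# The moved range (min copied from the log) should now be owned by shard0000.
print(mongos.config.chunks.find_one({"ns": "test.foo",
                                     "min": {"a": 12.55217658236718}}))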
m30999| Thu Jun 14 01:46:04 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:46:04 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:46:04 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97a9c0d2fef4d6a507beb" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a960d2fef4d6a507bea" } }
m30999| Thu Jun 14 01:46:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97a9c0d2fef4d6a507beb
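The two JSON documents above are the handoff on config.locks: the first appears to be the entry this mongos is about to write for the new round, the second is the current balancer lock document, still unlocked (state 0) from the round that just finished. The same document can be read straight from the config database; a sketch with assumed connection details:

# Sketch only: inspect the distributed balancer lock referenced in the log.
from pymongo import MongoClient

mongos = MongoClient("localhost", 30999)
lock = mongos.config.locks.find_one({"_id": "balancer"})
# 'state' is 0 when the lock is free; the 'who', 'why' and 'ts' fields match
# what the Balancer prints above while acquiring it.
print(lock)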
m30999| Thu Jun 14 01:46:04 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:46:04 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:46:04 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:04 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:04 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:46:04 [Balancer] shard0000
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 4000|256, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 2.742599007396374 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_2.742599007396374", lastmod: Timestamp 4000|257, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 2.742599007396374 }, max: { a: 5.826356493812579 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 4000|228, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 8.457858050974988 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_8.457858050974988", lastmod: Timestamp 4000|229, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 8.457858050974988 }, max: { a: 12.55217658236718 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] shard0001
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_20.02617482801994", lastmod: Timestamp 4000|214, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 20.02617482801994 }, max: { a: 22.72135361925398 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_22.72135361925398", lastmod: Timestamp 4000|215, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 22.72135361925398 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 4000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 34.95140019143683 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_34.95140019143683", lastmod: Timestamp 4000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 34.95140019143683 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 4000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 43.98990958864879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_43.98990958864879", lastmod: Timestamp 4000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 43.98990958864879 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 4000|102, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 51.90923851177054 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_51.90923851177054", lastmod: Timestamp 4000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 51.90923851177054 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 4000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 61.76919454003927 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_61.76919454003927", lastmod: Timestamp 4000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 61.76919454003927 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 4000|76, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 70.06331619195872 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_70.06331619195872", lastmod: Timestamp 4000|77, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 70.06331619195872 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 4000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 78.73686651492073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_78.73686651492073", lastmod: Timestamp 4000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 78.73686651492073 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 4000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 87.41840730135154 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_87.41840730135154", lastmod: Timestamp 4000|196, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 87.41840730135154 }, max: { a: 89.89791872458619 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_89.89791872458619", lastmod: Timestamp 4000|197, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 89.89791872458619 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 4000|220, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 95.6069228239147 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_95.6069228239147", lastmod: Timestamp 4000|244, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 95.6069228239147 }, max: { a: 98.16826107499755 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_98.16826107499755", lastmod: Timestamp 4000|245, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 98.16826107499755 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 4000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 106.0311910436654 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_106.0311910436654", lastmod: Timestamp 4000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 106.0311910436654 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 4000|78, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 114.9662096443472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_114.9662096443472", lastmod: Timestamp 4000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 114.9662096443472 }, max: { a: 118.3157678917793 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_118.3157678917793", lastmod: Timestamp 4000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 118.3157678917793 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 127.4590140914801 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_127.4590140914801", lastmod: Timestamp 4000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 127.4590140914801 }, max: { a: 131.8115136015859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_131.8115136015859", lastmod: Timestamp 4000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 131.8115136015859 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 141.1884883168546 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_141.1884883168546", lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 141.1884883168546 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 4000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 150.1357777689222 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_150.1357777689222", lastmod: Timestamp 4000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 150.1357777689222 }, max: { a: 153.684305048146 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_153.684305048146", lastmod: Timestamp 4000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 153.684305048146 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 4000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 163.3701742796004 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_163.3701742796004", lastmod: Timestamp 4000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 163.3701742796004 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 4000|226, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 170.2748683082939 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_170.2748683082939", lastmod: Timestamp 4000|227, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 170.2748683082939 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 4000|224, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 178.4802269484291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_178.4802269484291", lastmod: Timestamp 4000|225, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 178.4802269484291 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 4000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 184.9464054233513 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_184.9464054233513", lastmod: Timestamp 4000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 184.9464054233513 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 4000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 191.5307698720086 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_191.5307698720086", lastmod: Timestamp 4000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 191.5307698720086 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 4000|94, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 198.5601903660538 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_198.5601903660538", lastmod: Timestamp 4000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 198.5601903660538 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 4000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 207.0875453859469 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_207.0875453859469", lastmod: Timestamp 4000|185, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 207.0875453859469 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 4000|204, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 212.8104857756458 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_212.8104857756458", lastmod: Timestamp 4000|205, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 212.8104857756458 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 4000|100, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 220.5716558736682 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_220.5716558736682", lastmod: Timestamp 4000|248, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 220.5716558736682 }, max: { a: 222.9840106087572 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_222.9840106087572", lastmod: Timestamp 4000|249, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 222.9840106087572 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 4000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 228.7035403403385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_228.7035403403385", lastmod: Timestamp 4000|234, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 228.7035403403385 }, max: { a: 231.249558963907 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_231.249558963907", lastmod: Timestamp 4000|235, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 231.249558963907 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 4000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 236.7690508533622 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_236.7690508533622", lastmod: Timestamp 4000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 236.7690508533622 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 4000|210, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 242.6421093833427 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_242.6421093833427", lastmod: Timestamp 4000|266, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 242.6421093833427 }, max: { a: 245.1924455307789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_245.1924455307789", lastmod: Timestamp 4000|267, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 245.1924455307789 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 4000|222, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 250.7993295308498 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_250.7993295308498", lastmod: Timestamp 4000|223, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 250.7993295308498 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 4000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 258.6206493525194 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_258.6206493525194", lastmod: Timestamp 4000|264, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 258.6206493525194 }, max: { a: 261.2663901230094 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_261.2663901230094", lastmod: Timestamp 4000|265, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 261.2663901230094 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 4000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 280.6827052136106 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_280.6827052136106", lastmod: Timestamp 4000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 280.6827052136106 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 289.7137301985317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_289.7137301985317", lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 289.7137301985317 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 4000|252, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 302.7151830329477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_302.7151830329477", lastmod: Timestamp 4000|253, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 302.7151830329477 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 4000|186, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 312.3135459595852 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_312.3135459595852", lastmod: Timestamp 4000|187, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 312.3135459595852 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 4000|250, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 323.8729876956295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_323.8729876956295", lastmod: Timestamp 4000|251, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 323.8729876956295 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 4000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 331.4018789379612 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_331.4018789379612", lastmod: Timestamp 4000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 331.4018789379612 }, max: { a: 334.3168575448847 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_334.3168575448847", lastmod: Timestamp 4000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 334.3168575448847 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 4000|212, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 340.4008653065953 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_340.4008653065953", lastmod: Timestamp 4000|213, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 340.4008653065953 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 4000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 349.1094580993942 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_349.1094580993942", lastmod: Timestamp 4000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 349.1094580993942 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 4000|240, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 355.8076820303829 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_355.8076820303829", lastmod: Timestamp 4000|241, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 355.8076820303829 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 4000|232, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 360.7881657776425 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_360.7881657776425", lastmod: Timestamp 4000|233, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 360.7881657776425 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 4000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 373.3849373054079 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_373.3849373054079", lastmod: Timestamp 4000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 373.3849373054079 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 4000|242, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 380.9471963970786 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_380.9471963970786", lastmod: Timestamp 4000|243, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 380.9471963970786 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 4000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 387.7659705009871 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_387.7659705009871", lastmod: Timestamp 4000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 387.7659705009871 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 4000|208, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 395.6502767966605 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_395.6502767966605", lastmod: Timestamp 4000|258, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 395.6502767966605 }, max: { a: 398.1780778922134 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_398.1780778922134", lastmod: Timestamp 4000|259, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 398.1780778922134 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 4000|104, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 404.1458625239371 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_404.1458625239371", lastmod: Timestamp 4000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 404.1458625239371 }, max: { a: 407.0796926580036 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_407.0796926580036", lastmod: Timestamp 4000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 407.0796926580036 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 4000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 413.7945438036655 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_413.7945438036655", lastmod: Timestamp 4000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 413.7945438036655 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 4000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 430.2130944220548 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_430.2130944220548", lastmod: Timestamp 4000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 430.2130944220548 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 4000|74, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 437.040103636678 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_437.040103636678", lastmod: Timestamp 4000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 437.040103636678 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 4000|236, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 443.7079718299926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_443.7079718299926", lastmod: Timestamp 4000|237, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 443.7079718299926 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 4000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 451.8120411874291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_451.8120411874291", lastmod: Timestamp 4000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 451.8120411874291 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 4000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 459.7315330482733 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_459.7315330482733", lastmod: Timestamp 4000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 459.7315330482733 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 4000|218, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 466.1607312365173 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_466.1607312365173", lastmod: Timestamp 4000|219, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 466.1607312365173 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 4000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 477.2807394020033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_477.2807394020033", lastmod: Timestamp 4000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 477.2807394020033 }, max: { a: 480.2747403619077 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_480.2747403619077", lastmod: Timestamp 4000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 480.2747403619077 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 4000|110, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 493.6797279933101 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_493.6797279933101", lastmod: Timestamp 4000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 493.6797279933101 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 4000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 501.5945768521381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_501.5945768521381", lastmod: Timestamp 4000|246, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 501.5945768521381 }, max: { a: 503.8814286501491 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_503.8814286501491", lastmod: Timestamp 4000|247, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 503.8814286501491 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 4000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 510.639225969218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_510.639225969218", lastmod: Timestamp 4000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 510.639225969218 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 4000|262, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 518.2463999492195 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_518.2463999492195", lastmod: Timestamp 4000|263, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 518.2463999492195 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 531.7597013546634 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_531.7597013546634", lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 531.7597013546634 }, max: { a: 536.0462960134931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_536.0462960134931", lastmod: Timestamp 4000|188, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 536.0462960134931 }, max: { a: 539.1281234038355 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_539.1281234038355", lastmod: Timestamp 4000|189, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 539.1281234038355 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 4000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 545.8257932837977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_545.8257932837977", lastmod: Timestamp 4000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 545.8257932837977 }, max: { a: 548.9817180888258 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_548.9817180888258", lastmod: Timestamp 4000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 548.9817180888258 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 4000|260, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 554.5352736346487 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_554.5352736346487", lastmod: Timestamp 4000|261, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 554.5352736346487 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 4000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 560.838593433049 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_560.838593433049", lastmod: Timestamp 4000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 560.838593433049 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 4000|92, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 567.3645636091692 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_567.3645636091692", lastmod: Timestamp 4000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 567.3645636091692 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 4000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 575.2102660145707 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_575.2102660145707", lastmod: Timestamp 4000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 575.2102660145707 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 4000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 584.4225320226172 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_584.4225320226172", lastmod: Timestamp 4000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 584.4225320226172 }, max: { a: 587.1685851091131 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_587.1685851091131", lastmod: Timestamp 4000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 587.1685851091131 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 4000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 594.3878051880898 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_594.3878051880898", lastmod: Timestamp 4000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 594.3878051880898 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 603.53104016638 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_603.53104016638", lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 603.53104016638 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 4000|238, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 617.9571577143996 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_617.9571577143996", lastmod: Timestamp 4000|239, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 617.9571577143996 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 628.1995001147562 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_628.1995001147562", lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 628.1995001147562 }, max: { a: 632.4786347534061 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_632.4786347534061", lastmod: Timestamp 4000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 632.4786347534061 }, max: { a: 636.2085863336085 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_636.2085863336085", lastmod: Timestamp 4000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 636.2085863336085 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 4000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 644.4017960752651 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_644.4017960752651", lastmod: Timestamp 4000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 644.4017960752651 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 4000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 652.9401841699823 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_652.9401841699823", lastmod: Timestamp 4000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 652.9401841699823 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 4000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 660.6896106858891 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_660.6896106858891", lastmod: Timestamp 4000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 660.6896106858891 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 4000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 668.6362621623331 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_668.6362621623331", lastmod: Timestamp 4000|88, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 668.6362621623331 }, max: { a: 672.2870891659105 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_672.2870891659105", lastmod: Timestamp 4000|206, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 672.2870891659105 }, max: { a: 675.1811603867598 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_675.1811603867598", lastmod: Timestamp 4000|207, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 675.1811603867598 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 4000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 681.3003030169281 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_681.3003030169281", lastmod: Timestamp 4000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 681.3003030169281 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 4000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 689.5707127489441 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_689.5707127489441", lastmod: Timestamp 4000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 689.5707127489441 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 4000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 698.4329238257609 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_698.4329238257609", lastmod: Timestamp 4000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 698.4329238257609 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 4000|198, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 717.0859810000978 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_717.0859810000978", lastmod: Timestamp 4000|199, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 717.0859810000978 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 4000|82, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 725.5771489434317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_725.5771489434317", lastmod: Timestamp 4000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 725.5771489434317 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 4000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 732.9348251743502 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_732.9348251743502", lastmod: Timestamp 4000|202, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 732.9348251743502 }, max: { a: 735.4457009121708 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_735.4457009121708", lastmod: Timestamp 4000|203, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 735.4457009121708 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 4000|192, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 741.3245176669844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_741.3245176669844", lastmod: Timestamp 4000|193, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 741.3245176669844 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 4000|80, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 748.6872188241756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_748.6872188241756", lastmod: Timestamp 4000|81, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 748.6872188241756 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 4000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 756.637103632288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_756.637103632288", lastmod: Timestamp 4000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 756.637103632288 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 4000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 765.2211241548246 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_765.2211241548246", lastmod: Timestamp 4000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 765.2211241548246 }, max: { a: 768.6399184840259 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_768.6399184840259", lastmod: Timestamp 4000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 768.6399184840259 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 777.6503149863191 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_777.6503149863191", lastmod: Timestamp 4000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 777.6503149863191 }, max: { a: 780.6933276463033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_780.6933276463033", lastmod: Timestamp 4000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 780.6933276463033 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 4000|200, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 787.2181223195419 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_787.2181223195419", lastmod: Timestamp 4000|201, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 787.2181223195419 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 4000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 793.7120312511385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_793.7120312511385", lastmod: Timestamp 4000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 793.7120312511385 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 802.4966878498034 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_802.4966878498034", lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 802.4966878498034 }, max: { a: 807.4105833931693 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_807.4105833931693", lastmod: Timestamp 4000|118, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 807.4105833931693 }, max: { a: 810.8918013325706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_810.8918013325706", lastmod: Timestamp 4000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 810.8918013325706 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 4000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 824.2680954051706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_824.2680954051706", lastmod: Timestamp 4000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 824.2680954051706 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 4000|216, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 836.3608305125814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_836.3608305125814", lastmod: Timestamp 4000|217, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 836.3608305125814 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 4000|122, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 843.8858257205128 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_843.8858257205128", lastmod: Timestamp 4000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 843.8858257205128 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 4000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 851.468355264985 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_851.468355264985", lastmod: Timestamp 4000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 851.468355264985 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 4000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 864.7746195980726 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_864.7746195980726", lastmod: Timestamp 4000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 864.7746195980726 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 4000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 877.8438233640235 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_877.8438233640235", lastmod: Timestamp 4000|87, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 877.8438233640235 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 4000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 886.5207670748756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_886.5207670748756", lastmod: Timestamp 4000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 886.5207670748756 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 4000|190, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 894.8106130543974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_894.8106130543974", lastmod: Timestamp 4000|191, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 894.8106130543974 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 4000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 901.6037051063506 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_901.6037051063506", lastmod: Timestamp 4000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 901.6037051063506 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 4000|254, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 907.8304631917699 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_907.8304631917699", lastmod: Timestamp 4000|255, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 907.8304631917699 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 4000|146, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 914.1361338478089 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_914.1361338478089", lastmod: Timestamp 4000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 914.1361338478089 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 4000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 921.5853246168082 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_921.5853246168082", lastmod: Timestamp 4000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 921.5853246168082 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 4000|152, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 951.1531632632295 }, shard: "shard0001" }
m30000| Thu Jun 14 01:46:04 [conn11] ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30000| end connection 127.0.0.1:60402 (17 connections now open)
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_951.1531632632295", lastmod: Timestamp 4000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 951.1531632632295 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 960.5824651536831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_960.5824651536831", lastmod: Timestamp 4000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 960.5824651536831 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 4000|194, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 973.4895868865218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_973.4895868865218", lastmod: Timestamp 4000|195, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 973.4895868865218 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 4000|96, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 980.667776515926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_980.667776515926", lastmod: Timestamp 4000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 980.667776515926 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 4000|230, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 988.3510075746844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_988.3510075746844", lastmod: Timestamp 4000|231, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 988.3510075746844 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 4000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 994.7222740534528 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_994.7222740534528", lastmod: Timestamp 4000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 994.7222740534528 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] ----
m30999| Thu Jun 14 01:46:04 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:46:04 [Balancer] donor : 255 chunks on shard0001
m30999| Thu Jun 14 01:46:04 [Balancer] receiver : 6 chunks on shard0000
m30999| Thu Jun 14 01:46:04 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:46:04 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:04 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:04 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:46:04 [Balancer] shard0000
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff228c8')", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, max: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22c95')", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, max: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff2305f')", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, max: { _id: ObjectId('4fd97a3d05a35677eff23246') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2342c')", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, max: { _id: ObjectId('4fd97a3d05a35677eff23611') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff237f5')", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, max: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23bc4')", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, max: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23f8f')", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24176') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2435d')", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, max: { _id: ObjectId('4fd97a3d05a35677eff24541') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24727')", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24727') }, max: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24af4')", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, max: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24ec4')", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, max: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25295')", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25295') }, max: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25663')", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25663') }, max: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25a31')", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, max: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25e01')", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, max: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff261d0')", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, max: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff26598')", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff26598') }, max: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26964')", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26964') }, max: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26d35')", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, max: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27105')", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27105') }, max: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff274d5')", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, max: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff278a1')", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, max: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27c6f')", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2803f')", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, max: { _id: ObjectId('4fd97a3f05a35677eff28226') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2840d')", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, max: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff287d7')", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff287d7') }, max: { _id: ObjectId('4fd97a4005a35677eff289bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28ba4')", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, max: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28f71')", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28f71') }, max: { _id: ObjectId('4fd97a4005a35677eff29159') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2933f')", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2933f') }, max: { _id: ObjectId('4fd97a4005a35677eff29523') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29708')", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29708') }, max: { _id: ObjectId('4fd97a4005a35677eff298ed') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29ad4')", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, max: { _id: ObjectId('4fd97a4005a35677eff29cba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29e9f')", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, max: { _id: ObjectId('4fd97a4005a35677eff2a086') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a26b')", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, max: { _id: ObjectId('4fd97a4005a35677eff2a450') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a636')", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a636') }, max: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2aa03')", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, max: { _id: ObjectId('4fd97a4105a35677eff2abea') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2add0')", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2add0') }, max: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b1a0')", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, max: { _id: ObjectId('4fd97a4105a35677eff2b387') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b56f')", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, max: { _id: ObjectId('4fd97a4105a35677eff2b757') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b93b')", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, max: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bd07')", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, max: { _id: ObjectId('4fd97a4205a35677eff2beee') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c0d4')", lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, max: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c4a2')", lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, max: { _id: ObjectId('4fd97a4205a35677eff2c687') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c86f')", lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, max: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2cc39')", lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, max: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d008')", lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d008') }, max: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d3d5')", lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, max: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d7a1')", lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, max: { _id: ObjectId('4fd97a4305a35677eff2d986') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2db6f')", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, max: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2df3e')", lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, max: { _id: ObjectId('4fd97a4305a35677eff2e127') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e30d')", lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, max: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e6d8')", lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, max: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2eaa5')", lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, max: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ee6d')", lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, max: { _id: ObjectId('4fd97a4305a35677eff2f052') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f239')", lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f239') }, max: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f603')", lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f603') }, max: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f9cd')", lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, max: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fd9a')", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, max: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3016a')", lastmod: Timestamp 1000|115, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3016a') }, max: { _id: ObjectId('4fd97a4405a35677eff30351') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30537')", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30537') }, max: { _id: ObjectId('4fd97a4405a35677eff30721') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30907')", lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30907') }, max: { _id: ObjectId('4fd97a4405a35677eff30aef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30cd5')", lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, max: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff310a7')", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff310a7') }, max: { _id: ObjectId('4fd97a4405a35677eff3128e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31473')", lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31473') }, max: { _id: ObjectId('4fd97a4405a35677eff3165b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31841')", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31841') }, max: { _id: ObjectId('4fd97a4405a35677eff31a28') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31c0d')", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, max: { _id: ObjectId('4fd97a4405a35677eff31df3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31fda')", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31fda') }, max: { _id: ObjectId('4fd97a4405a35677eff321bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff323a4')", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff323a4') }, max: { _id: ObjectId('4fd97a4405a35677eff3258c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32774')", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32774') }, max: { _id: ObjectId('4fd97a4505a35677eff32958') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32b3d')", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, max: { _id: ObjectId('4fd97a4505a35677eff32d23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32f0c')", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, max: { _id: ObjectId('4fd97a4505a35677eff330f5') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff332d9')", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff332d9') }, max: { _id: ObjectId('4fd97a4505a35677eff334c2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff336ab')", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff336ab') }, max: { _id: ObjectId('4fd97a4505a35677eff33891') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33a77')", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33a77') }, max: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33e41')", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33e41') }, max: { _id: ObjectId('4fd97a4605a35677eff34026') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff3420d')", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff3420d') }, max: { _id: ObjectId('4fd97a4605a35677eff343f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff345d9')", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff345d9') }, max: { _id: ObjectId('4fd97a4605a35677eff347c1') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff349a9')", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff349a9') }, max: { _id: ObjectId('4fd97a4705a35677eff34b90') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34d79')", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34d79') }, max: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35147')", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35147') }, max: { _id: ObjectId('4fd97a4705a35677eff3532c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35511')", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35511') }, max: { _id: ObjectId('4fd97a4705a35677eff356fa') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff358e1')", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff358e1') }, max: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35cab')", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35cab') }, max: { _id: ObjectId('4fd97a4705a35677eff35e91') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3607a')", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3607a') }, max: { _id: ObjectId('4fd97a4805a35677eff3625f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36447')", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36447') }, max: { _id: ObjectId('4fd97a4805a35677eff3662c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36814')", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36814') }, max: { _id: ObjectId('4fd97a4805a35677eff369f9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36be0')", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36be0') }, max: { _id: ObjectId('4fd97a4805a35677eff36dca') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36faf')", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36faf') }, max: { _id: ObjectId('4fd97a4805a35677eff37195') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3737a')", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3737a') }, max: { _id: ObjectId('4fd97a4805a35677eff37560') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37747')", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37747') }, max: { _id: ObjectId('4fd97a4905a35677eff3792f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37b15')", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37b15') }, max: { _id: ObjectId('4fd97a4905a35677eff37cff') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37ee8')", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, max: { _id: ObjectId('4fd97a4905a35677eff380d0') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff382b9')", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff382b9') }, max: { _id: ObjectId('4fd97a4905a35677eff3849e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38684')", lastmod: Timestamp 1000|185, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38684') }, max: { _id: ObjectId('4fd97a4905a35677eff38869') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38a4e')", lastmod: Timestamp 1000|187, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, max: { _id: ObjectId('4fd97a4905a35677eff38c32') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38e1d')", lastmod: Timestamp 1000|189, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, max: { _id: ObjectId('4fd97a4905a35677eff39001') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff391e8')", lastmod: Timestamp 1000|191, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff391e8') }, max: { _id: ObjectId('4fd97a4905a35677eff393cf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff395b6')", lastmod: Timestamp 1000|193, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff395b6') }, max: { _id: ObjectId('4fd97a4905a35677eff3979b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39985')", lastmod: Timestamp 1000|195, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39985') }, max: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39d51')", lastmod: Timestamp 1000|197, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, max: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a121')", lastmod: Timestamp 1000|199, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a4ed')", lastmod: Timestamp 1000|201, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a8b9')", lastmod: Timestamp 1000|203, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, max: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3ac84')", lastmod: Timestamp 1000|205, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:04 [Balancer] shard0001
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: MinKey }, max: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22aac')", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, max: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22e7b')", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, max: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23246')", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23246') }, max: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23611')", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23611') }, max: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff239dc')", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, max: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23da9')", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, max: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24176')", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24176') }, max: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24541')", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24541') }, max: { _id: ObjectId('4fd97a3d05a35677eff24727') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2490f')", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24cde')", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, max: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff250ad')", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, max: { _id: ObjectId('4fd97a3e05a35677eff25295') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2547d')", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, max: { _id: ObjectId('4fd97a3e05a35677eff25663') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2584a')", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, max: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25c16')", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, max: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25fe8')", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, max: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff263b4')", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, max: { _id: ObjectId('4fd97a3e05a35677eff26598') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2677e')", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, max: { _id: ObjectId('4fd97a3f05a35677eff26964') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26b4c')", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, max: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26f1f')", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27105') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff272ec')", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, max: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff276ba')", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, max: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27a87')", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, max: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27e57')", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, max: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff28226')", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff28226') }, max: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff285f3')", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, max: { _id: ObjectId('4fd97a4005a35677eff287d7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff289bf')", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff289bf') }, max: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28d8b')", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, max: { _id: ObjectId('4fd97a4005a35677eff28f71') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29159')", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29159') }, max: { _id: ObjectId('4fd97a4005a35677eff2933f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29523')", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29523') }, max: { _id: ObjectId('4fd97a4005a35677eff29708') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff298ed')", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff298ed') }, max: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29cba')", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29cba') }, max: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a086')", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a086') }, max: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a450')", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a450') }, max: { _id: ObjectId('4fd97a4105a35677eff2a636') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a81d')", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, max: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2abea')", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2abea') }, max: { _id: ObjectId('4fd97a4105a35677eff2add0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2afb8')", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, max: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b387')", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b387') }, max: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b757')", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b757') }, max: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bb23')", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, max: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2beee')", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2beee') }, max: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c2bb')", lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, max: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c687')", lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c687') }, max: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ca54')", lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, max: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ce20')", lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, max: { _id: ObjectId('4fd97a4205a35677eff2d008') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d1ef')", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, max: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d5bc')", lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, max: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2d986')", lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2d986') }, max: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2dd54')", lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, max: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e127')", lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e127') }, max: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e4f2')", lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, max: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e8bf')", lastmod: Timestamp 1000|102, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, max: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ec89')", lastmod: Timestamp 1000|104, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, max: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f052')", lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f052') }, max: { _id: ObjectId('4fd97a4305a35677eff2f239') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f41f')", lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, max: { _id: ObjectId('4fd97a4305a35677eff2f603') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f7e7')", lastmod: Timestamp 1000|110, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, max: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fbb4')", lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, max: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ff82')", lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, max: { _id: ObjectId('4fd97a4405a35677eff3016a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30351')", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30351') }, max: { _id: ObjectId('4fd97a4405a35677eff30537') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30721')", lastmod: Timestamp 1000|118, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30721') }, max: { _id: ObjectId('4fd97a4405a35677eff30907') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30aef')", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30aef') }, max: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30ebc')", lastmod: Timestamp 1000|122, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, max: { _id: ObjectId('4fd97a4405a35677eff310a7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3128e')", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3128e') }, max: { _id: ObjectId('4fd97a4405a35677eff31473') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3165b')", lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3165b') }, max: { _id: ObjectId('4fd97a4405a35677eff31841') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31a28')", lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31a28') }, max: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31df3')", lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31df3') }, max: { _id: ObjectId('4fd97a4405a35677eff31fda') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff321bf')", lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff321bf') }, max: { _id: ObjectId('4fd97a4405a35677eff323a4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3258c')", lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3258c') }, max: { _id: ObjectId('4fd97a4505a35677eff32774') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32958')", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32958') }, max: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32d23')", lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32d23') }, max: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff330f5')", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff330f5') }, max: { _id: ObjectId('4fd97a4505a35677eff332d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff334c2')", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff334c2') }, max: { _id: ObjectId('4fd97a4505a35677eff336ab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff33891')", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff33891') }, max: { _id: ObjectId('4fd97a4605a35677eff33a77') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33c5c')", lastmod: Timestamp 1000|146, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, max: { _id: ObjectId('4fd97a4605a35677eff33e41') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff34026')", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff34026') }, max: { _id: ObjectId('4fd97a4605a35677eff3420d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff343f3')", lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff343f3') }, max: { _id: ObjectId('4fd97a4605a35677eff345d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff347c1')", lastmod: Timestamp 1000|152, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff347c1') }, max: { _id: ObjectId('4fd97a4605a35677eff349a9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34b90')", lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34b90') }, max: { _id: ObjectId('4fd97a4705a35677eff34d79') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34f5f')", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, max: { _id: ObjectId('4fd97a4705a35677eff35147') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff3532c')", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff3532c') }, max: { _id: ObjectId('4fd97a4705a35677eff35511') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff356fa')", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff356fa') }, max: { _id: ObjectId('4fd97a4705a35677eff358e1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35ac6')", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, max: { _id: ObjectId('4fd97a4705a35677eff35cab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35e91')", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35e91') }, max: { _id: ObjectId('4fd97a4805a35677eff3607a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3625f')", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3625f') }, max: { _id: ObjectId('4fd97a4805a35677eff36447') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3662c')", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3662c') }, max: { _id: ObjectId('4fd97a4805a35677eff36814') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff369f9')", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff369f9') }, max: { _id: ObjectId('4fd97a4805a35677eff36be0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36dca')", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36dca') }, max: { _id: ObjectId('4fd97a4805a35677eff36faf') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37195')", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37195') }, max: { _id: ObjectId('4fd97a4805a35677eff3737a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37560')", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37560') }, max: { _id: ObjectId('4fd97a4905a35677eff37747') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3792f')", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3792f') }, max: { _id: ObjectId('4fd97a4905a35677eff37b15') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37cff')", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37cff') }, max: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff380d0')", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff380d0') }, max: { _id: ObjectId('4fd97a4905a35677eff382b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3849e')", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3849e') }, max: { _id: ObjectId('4fd97a4905a35677eff38684') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38869')", lastmod: Timestamp 1000|186, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38869') }, max: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38c32')", lastmod: Timestamp 1000|188, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38c32') }, max: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff39001')", lastmod: Timestamp 1000|190, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff39001') }, max: { _id: ObjectId('4fd97a4905a35677eff391e8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff393cf')", lastmod: Timestamp 1000|192, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff393cf') }, max: { _id: ObjectId('4fd97a4905a35677eff395b6') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3979b')", lastmod: Timestamp 1000|194, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3979b') }, max: { _id: ObjectId('4fd97a4a05a35677eff39985') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39b6a')", lastmod: Timestamp 1000|196, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, max: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39f36')", lastmod: Timestamp 1000|198, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a306')", lastmod: Timestamp 1000|200, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a6d3')", lastmod: Timestamp 1000|202, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3aa9d')", lastmod: Timestamp 1000|204, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, max: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:04 [Balancer] ----
m30999| Thu Jun 14 01:46:04 [Balancer] collection : test.mrShardedOut
m30999| Thu Jun 14 01:46:04 [Balancer] donor : 103 chunks on shard0000
m30999| Thu Jun 14 01:46:04 [Balancer] receiver : 103 chunks on shard0000
m30999| Thu Jun 14 01:46:04 [Balancer] Assertion: 10320:BSONElement: bad type 60
m30999| 0x84f514a 0x8126495 0x83f3537 0x811ddd3 0x81f8992 0x835a481 0x82c3073 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0x9d4542 0x40db6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo11BSONElement4sizeEv+0x1b3) [0x811ddd3]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo7BSONObj13extractFieldsERKS0_b+0x132) [0x81f8992]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo12ChunkManager9findChunkERKNS_7BSONObjE+0x1e1) [0x835a481]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x613) [0x82c3073]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c) [0x82c4b6c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0x9d4542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x40db6e]
m30999| Thu Jun 14 01:46:04 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30999| Thu Jun 14 01:46:04 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Thu Jun 14 01:46:04 [Balancer] caught exception while doing balance: BSONElement: bad type 60
m30999| Thu Jun 14 01:46:04 [Balancer] *** End of balancing round
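The assertion just above (10320: BSONElement: bad type 60) comes from BSONElement::size(): every BSON element begins with a one-byte type code, and 60 (0x3C) is not a defined BSON type, so the element cannot be decoded. The backtrace places the failure in ChunkManager::findChunk while Balancer::_moveChunks was resolving a chunk boundary for test.mrShardedOut; mongos catches the exception, discards the scoped config connection, releases the distributed lock, and abandons the round. A minimal mongo-shell sketch for checking whether such a bad element is persisted in the config metadata follows (an in-memory-only problem on this mongos would not show up here); it assumes a shell connected to the same mongos at localhost:30999 and uses only standard shell helpers:

    // Walk the persisted chunk metadata for the collection named in the trace.
    // Decoding each document, and re-serializing its bounds with Object.bsonsize(),
    // forces every element's type byte to be read, so a bound stored with an
    // invalid type would surface here as an exception for that chunk.
    var cur = db.getSiblingDB("config").chunks.find({ ns: "test.mrShardedOut" });
    while (cur.hasNext()) {
        try {
            var c = cur.next();
            Object.bsonsize(c.min);
            Object.bsonsize(c.max);
        } catch (e) {
            print("unreadable chunk document: " + e);
        }
    }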
Count is 200000
m30001| Thu Jun 14 01:46:07 [conn3] CMD: drop test.tmp.mr.foo_2_inc
m30001| Thu Jun 14 01:46:07 [conn3] build index test.tmp.mr.foo_2_inc { 0: 1 }
m30000| Thu Jun 14 01:46:07 [conn7] CMD: drop test.tmp.mr.foo_2_inc
m30000| Thu Jun 14 01:46:07 [conn7] build index test.tmp.mr.foo_2_inc { 0: 1 }
m30000| Thu Jun 14 01:46:07 [conn7] build index done. scanned 0 total records. 0.014 secs
m30000| Thu Jun 14 01:46:07 [conn7] CMD: drop test.tmp.mr.foo_2
m30000| Thu Jun 14 01:46:07 [conn7] build index test.tmp.mr.foo_2 { _id: 1 }
m30000| Thu Jun 14 01:46:07 [conn7] build index done. scanned 0 total records. 0 secs
m30001| Thu Jun 14 01:46:07 [conn3] build index done. scanned 0 total records. 0.009 secs
m30001| Thu Jun 14 01:46:07 [conn3] CMD: drop test.tmp.mr.foo_2
m30001| Thu Jun 14 01:46:07 [conn3] build index test.tmp.mr.foo_2 { _id: 1 }
m30001| Thu Jun 14 01:46:07 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:46:07 [conn7] CMD: drop test.tmp.mrs.foo_1339652767_1
m30000| Thu Jun 14 01:46:07 [conn7] CMD: drop test.tmp.mr.foo_2
m30000| Thu Jun 14 01:46:07 [conn7] request split points lookup for chunk test.tmp.mrs.foo_1339652767_1 { : MinKey } -->> { : MaxKey }
m30000| Thu Jun 14 01:46:07 [conn7] CMD: drop test.tmp.mr.foo_2
m30000| Thu Jun 14 01:46:07 [conn7] CMD: drop test.tmp.mr.foo_2_inc
m30000| Thu Jun 14 01:46:07 [conn7] command test.$cmd command: { mapreduce: "foo", map: function map2() {
m30000| emit(this._id, {count:1, y:this.y});
m30000| }, reduce: function reduce2(key, values) {
m30000| return values[0];
m30000| }, out: "tmp.mrs.foo_1339652767_1", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 3184 locks(micros) W:6812 r:385996 w:4171437 reslen:314 556ms
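The command logged above is the shard-side first pass of a map-reduce whose output collection is itself sharded: mongos asks each shard that owns chunks of test.foo to run the map and reduce locally into a timestamp-suffixed temporary namespace (tmp.mrs.foo_1339652767_1 here) and to report split points (splitInfo: 1048576 matches the 1 MB chunk size in use), and the per-shard results are merged into test.mrShardedOut afterwards. A shell invocation of roughly the following shape would produce it; the function bodies are copied from the log, while the exact out options are an assumption based on the target collection seen earlier in this run:

    function map2() {
        emit(this._id, { count: 1, y: this.y });
    }
    function reduce2(key, values) {
        return values[0];
    }
    // out.sharded: true is what makes mongos drive the shardedFirstPass on
    // every shard and then distribute the merged result collection.
    var res = db.foo.mapReduce(map2, reduce2,
                               { out: { replace: "mrShardedOut", sharded: true } });
    printjson(res);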
m30001| Thu Jun 14 01:46:10 [conn3] 29500/196848 14%
m30001| Thu Jun 14 01:46:13 [conn3] 61000/196848 30%
m30001| Thu Jun 14 01:46:16 [conn3] 93300/196848 47%
m30001| Thu Jun 14 01:46:19 [conn3] 123300/196848 62%
m30001| Thu Jun 14 01:46:22 [conn3] 152800/196848 77%
m30001| Thu Jun 14 01:46:25 [conn3] 179400/196848 91%
m30001| Thu Jun 14 01:46:28 [clientcursormon] mem (MB) res:698 virt:1285 mapped:1023
m30999| Thu Jun 14 01:46:28 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:46:28 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652667:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:46:28 [clientcursormon] mem (MB) res:172 virt:362 mapped:160
m30001| Thu Jun 14 01:46:30 [conn3] 41000/196848 20%
m30001| Thu Jun 14 01:46:33 [conn3] 103700/196848 52%
m30999| Thu Jun 14 01:46:34 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:46:34 BackgroundJob starting: ConnectBG
m30000| Thu Jun 14 01:46:34 [initandlisten] connection accepted from 127.0.0.1:39148 #21 (18 connections now open)
m30999| Thu Jun 14 01:46:34 [Balancer] connected connection!
m30999| Thu Jun 14 01:46:34 [Balancer] Refreshing MaxChunkSize: 1
m30999| Thu Jun 14 01:46:34 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:46:34 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97aba0d2fef4d6a507bec" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a9c0d2fef4d6a507beb" } }
m30999| Thu Jun 14 01:46:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97aba0d2fef4d6a507bec
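The two documents printed before this acquisition are the entry the balancer intends to write (state 1, with who/process/why filled in) and the entry it currently sees in config.locks (state 0, i.e. unlocked), so the lock is taken and stamped with the new ts shown above. The lock state is ordinary data in the config database and can be inspected from the same shell; a minimal sketch using only standard helpers (state semantics as observed in this build: 0 means unlocked, non-zero means held or being acquired):

    var conf = db.getSiblingDB("config");
    // Current balancer lock document; ts changes on every successful acquisition.
    printjson(conf.locks.findOne({ _id: "balancer" }));
    // Heartbeats written by the distributed lock pingers (the LockPinger lines
    // in this log correspond to updates of this collection), newest first.
    conf.lockpings.find().sort({ ping: -1 }).forEach(printjson);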
m30999| Thu Jun 14 01:46:34 [Balancer] *** start balancing round
m30999| Thu Jun 14 01:46:34 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:46:34 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:34 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:34 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:46:34 [Balancer] shard0000
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 4000|256, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 2.742599007396374 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_2.742599007396374", lastmod: Timestamp 4000|257, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 2.742599007396374 }, max: { a: 5.826356493812579 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 4000|228, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 8.457858050974988 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_8.457858050974988", lastmod: Timestamp 4000|229, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 8.457858050974988 }, max: { a: 12.55217658236718 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] shard0001
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_20.02617482801994", lastmod: Timestamp 4000|214, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 20.02617482801994 }, max: { a: 22.72135361925398 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_22.72135361925398", lastmod: Timestamp 4000|215, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 22.72135361925398 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 4000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 34.95140019143683 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_34.95140019143683", lastmod: Timestamp 4000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 34.95140019143683 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 4000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 43.98990958864879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_43.98990958864879", lastmod: Timestamp 4000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 43.98990958864879 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 4000|102, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 51.90923851177054 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_51.90923851177054", lastmod: Timestamp 4000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 51.90923851177054 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 4000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 61.76919454003927 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_61.76919454003927", lastmod: Timestamp 4000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 61.76919454003927 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 4000|76, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 70.06331619195872 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_70.06331619195872", lastmod: Timestamp 4000|77, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 70.06331619195872 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 4000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 78.73686651492073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_78.73686651492073", lastmod: Timestamp 4000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 78.73686651492073 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 4000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 87.41840730135154 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_87.41840730135154", lastmod: Timestamp 4000|196, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 87.41840730135154 }, max: { a: 89.89791872458619 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_89.89791872458619", lastmod: Timestamp 4000|197, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 89.89791872458619 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 4000|220, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 95.6069228239147 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_95.6069228239147", lastmod: Timestamp 4000|244, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 95.6069228239147 }, max: { a: 98.16826107499755 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_98.16826107499755", lastmod: Timestamp 4000|245, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 98.16826107499755 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 4000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 106.0311910436654 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_106.0311910436654", lastmod: Timestamp 4000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 106.0311910436654 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 4000|78, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 114.9662096443472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_114.9662096443472", lastmod: Timestamp 4000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 114.9662096443472 }, max: { a: 118.3157678917793 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_118.3157678917793", lastmod: Timestamp 4000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 118.3157678917793 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 127.4590140914801 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_127.4590140914801", lastmod: Timestamp 4000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 127.4590140914801 }, max: { a: 131.8115136015859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_131.8115136015859", lastmod: Timestamp 4000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 131.8115136015859 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 141.1884883168546 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_141.1884883168546", lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 141.1884883168546 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 4000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 150.1357777689222 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_150.1357777689222", lastmod: Timestamp 4000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 150.1357777689222 }, max: { a: 153.684305048146 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_153.684305048146", lastmod: Timestamp 4000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 153.684305048146 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 4000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 163.3701742796004 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_163.3701742796004", lastmod: Timestamp 4000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 163.3701742796004 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 4000|226, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 170.2748683082939 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_170.2748683082939", lastmod: Timestamp 4000|227, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 170.2748683082939 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 4000|224, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 178.4802269484291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_178.4802269484291", lastmod: Timestamp 4000|225, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 178.4802269484291 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 4000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 184.9464054233513 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_184.9464054233513", lastmod: Timestamp 4000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 184.9464054233513 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 4000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 191.5307698720086 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_191.5307698720086", lastmod: Timestamp 4000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 191.5307698720086 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 4000|94, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 198.5601903660538 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_198.5601903660538", lastmod: Timestamp 4000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 198.5601903660538 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 4000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 207.0875453859469 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_207.0875453859469", lastmod: Timestamp 4000|185, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 207.0875453859469 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 4000|204, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 212.8104857756458 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_212.8104857756458", lastmod: Timestamp 4000|205, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 212.8104857756458 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 4000|100, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 220.5716558736682 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_220.5716558736682", lastmod: Timestamp 4000|248, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 220.5716558736682 }, max: { a: 222.9840106087572 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_222.9840106087572", lastmod: Timestamp 4000|249, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 222.9840106087572 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 4000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 228.7035403403385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_228.7035403403385", lastmod: Timestamp 4000|234, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 228.7035403403385 }, max: { a: 231.249558963907 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_231.249558963907", lastmod: Timestamp 4000|235, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 231.249558963907 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 4000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 236.7690508533622 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_236.7690508533622", lastmod: Timestamp 4000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 236.7690508533622 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 4000|210, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 242.6421093833427 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_242.6421093833427", lastmod: Timestamp 4000|266, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 242.6421093833427 }, max: { a: 245.1924455307789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_245.1924455307789", lastmod: Timestamp 4000|267, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 245.1924455307789 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 4000|222, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 250.7993295308498 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_250.7993295308498", lastmod: Timestamp 4000|223, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 250.7993295308498 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 4000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 258.6206493525194 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_258.6206493525194", lastmod: Timestamp 4000|264, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 258.6206493525194 }, max: { a: 261.2663901230094 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_261.2663901230094", lastmod: Timestamp 4000|265, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 261.2663901230094 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 4000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 280.6827052136106 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_280.6827052136106", lastmod: Timestamp 4000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 280.6827052136106 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 289.7137301985317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_289.7137301985317", lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 289.7137301985317 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 4000|252, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 302.7151830329477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_302.7151830329477", lastmod: Timestamp 4000|253, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 302.7151830329477 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 4000|186, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 312.3135459595852 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_312.3135459595852", lastmod: Timestamp 4000|187, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 312.3135459595852 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 4000|250, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 323.8729876956295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_323.8729876956295", lastmod: Timestamp 4000|251, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 323.8729876956295 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 4000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 331.4018789379612 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_331.4018789379612", lastmod: Timestamp 4000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 331.4018789379612 }, max: { a: 334.3168575448847 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_334.3168575448847", lastmod: Timestamp 4000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 334.3168575448847 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 4000|212, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 340.4008653065953 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_340.4008653065953", lastmod: Timestamp 4000|213, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 340.4008653065953 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 4000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 349.1094580993942 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_349.1094580993942", lastmod: Timestamp 4000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 349.1094580993942 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 4000|240, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 355.8076820303829 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_355.8076820303829", lastmod: Timestamp 4000|241, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 355.8076820303829 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 4000|232, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 360.7881657776425 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_360.7881657776425", lastmod: Timestamp 4000|233, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 360.7881657776425 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 4000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 373.3849373054079 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_373.3849373054079", lastmod: Timestamp 4000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 373.3849373054079 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 4000|242, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 380.9471963970786 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_380.9471963970786", lastmod: Timestamp 4000|243, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 380.9471963970786 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 4000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 387.7659705009871 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_387.7659705009871", lastmod: Timestamp 4000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 387.7659705009871 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 4000|208, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 395.6502767966605 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_395.6502767966605", lastmod: Timestamp 4000|258, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 395.6502767966605 }, max: { a: 398.1780778922134 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_398.1780778922134", lastmod: Timestamp 4000|259, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 398.1780778922134 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 4000|104, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 404.1458625239371 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_404.1458625239371", lastmod: Timestamp 4000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 404.1458625239371 }, max: { a: 407.0796926580036 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_407.0796926580036", lastmod: Timestamp 4000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 407.0796926580036 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 4000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 413.7945438036655 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_413.7945438036655", lastmod: Timestamp 4000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 413.7945438036655 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 4000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 430.2130944220548 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_430.2130944220548", lastmod: Timestamp 4000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 430.2130944220548 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 4000|74, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 437.040103636678 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_437.040103636678", lastmod: Timestamp 4000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 437.040103636678 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 4000|236, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 443.7079718299926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_443.7079718299926", lastmod: Timestamp 4000|237, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 443.7079718299926 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 4000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 451.8120411874291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_451.8120411874291", lastmod: Timestamp 4000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 451.8120411874291 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 4000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 459.7315330482733 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_459.7315330482733", lastmod: Timestamp 4000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 459.7315330482733 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 4000|218, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 466.1607312365173 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_466.1607312365173", lastmod: Timestamp 4000|219, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 466.1607312365173 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 4000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 477.2807394020033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_477.2807394020033", lastmod: Timestamp 4000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 477.2807394020033 }, max: { a: 480.2747403619077 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_480.2747403619077", lastmod: Timestamp 4000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 480.2747403619077 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 4000|110, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 493.6797279933101 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_493.6797279933101", lastmod: Timestamp 4000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 493.6797279933101 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 4000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 501.5945768521381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_501.5945768521381", lastmod: Timestamp 4000|246, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 501.5945768521381 }, max: { a: 503.8814286501491 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_503.8814286501491", lastmod: Timestamp 4000|247, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 503.8814286501491 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 4000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 510.639225969218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_510.639225969218", lastmod: Timestamp 4000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 510.639225969218 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 4000|262, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 518.2463999492195 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_518.2463999492195", lastmod: Timestamp 4000|263, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 518.2463999492195 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 531.7597013546634 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_531.7597013546634", lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 531.7597013546634 }, max: { a: 536.0462960134931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_536.0462960134931", lastmod: Timestamp 4000|188, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 536.0462960134931 }, max: { a: 539.1281234038355 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_539.1281234038355", lastmod: Timestamp 4000|189, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 539.1281234038355 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 4000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 545.8257932837977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_545.8257932837977", lastmod: Timestamp 4000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 545.8257932837977 }, max: { a: 548.9817180888258 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_548.9817180888258", lastmod: Timestamp 4000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 548.9817180888258 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 4000|260, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 554.5352736346487 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_554.5352736346487", lastmod: Timestamp 4000|261, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 554.5352736346487 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 4000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 560.838593433049 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_560.838593433049", lastmod: Timestamp 4000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 560.838593433049 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 4000|92, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 567.3645636091692 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_567.3645636091692", lastmod: Timestamp 4000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 567.3645636091692 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 4000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 575.2102660145707 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_575.2102660145707", lastmod: Timestamp 4000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 575.2102660145707 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 4000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 584.4225320226172 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_584.4225320226172", lastmod: Timestamp 4000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 584.4225320226172 }, max: { a: 587.1685851091131 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_587.1685851091131", lastmod: Timestamp 4000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 587.1685851091131 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 4000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 594.3878051880898 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_594.3878051880898", lastmod: Timestamp 4000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 594.3878051880898 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 603.53104016638 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_603.53104016638", lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 603.53104016638 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 4000|238, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 617.9571577143996 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_617.9571577143996", lastmod: Timestamp 4000|239, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 617.9571577143996 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 628.1995001147562 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_628.1995001147562", lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 628.1995001147562 }, max: { a: 632.4786347534061 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_632.4786347534061", lastmod: Timestamp 4000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 632.4786347534061 }, max: { a: 636.2085863336085 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_636.2085863336085", lastmod: Timestamp 4000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 636.2085863336085 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 4000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 644.4017960752651 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_644.4017960752651", lastmod: Timestamp 4000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 644.4017960752651 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 4000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 652.9401841699823 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_652.9401841699823", lastmod: Timestamp 4000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 652.9401841699823 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 4000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 660.6896106858891 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_660.6896106858891", lastmod: Timestamp 4000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 660.6896106858891 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 4000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 668.6362621623331 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_668.6362621623331", lastmod: Timestamp 4000|88, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 668.6362621623331 }, max: { a: 672.2870891659105 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_672.2870891659105", lastmod: Timestamp 4000|206, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 672.2870891659105 }, max: { a: 675.1811603867598 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_675.1811603867598", lastmod: Timestamp 4000|207, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 675.1811603867598 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 4000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 681.3003030169281 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_681.3003030169281", lastmod: Timestamp 4000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 681.3003030169281 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 4000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 689.5707127489441 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_689.5707127489441", lastmod: Timestamp 4000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 689.5707127489441 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 4000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 698.4329238257609 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_698.4329238257609", lastmod: Timestamp 4000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 698.4329238257609 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 4000|198, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 717.0859810000978 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_717.0859810000978", lastmod: Timestamp 4000|199, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 717.0859810000978 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 4000|82, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 725.5771489434317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_725.5771489434317", lastmod: Timestamp 4000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 725.5771489434317 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 4000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 732.9348251743502 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_732.9348251743502", lastmod: Timestamp 4000|202, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 732.9348251743502 }, max: { a: 735.4457009121708 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_735.4457009121708", lastmod: Timestamp 4000|203, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 735.4457009121708 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 4000|192, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 741.3245176669844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_741.3245176669844", lastmod: Timestamp 4000|193, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 741.3245176669844 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 4000|80, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 748.6872188241756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_748.6872188241756", lastmod: Timestamp 4000|81, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 748.6872188241756 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 4000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 756.637103632288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_756.637103632288", lastmod: Timestamp 4000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 756.637103632288 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 4000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 765.2211241548246 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_765.2211241548246", lastmod: Timestamp 4000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 765.2211241548246 }, max: { a: 768.6399184840259 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_768.6399184840259", lastmod: Timestamp 4000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 768.6399184840259 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 777.6503149863191 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_777.6503149863191", lastmod: Timestamp 4000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 777.6503149863191 }, max: { a: 780.6933276463033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_780.6933276463033", lastmod: Timestamp 4000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 780.6933276463033 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 4000|200, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 787.2181223195419 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_787.2181223195419", lastmod: Timestamp 4000|201, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 787.2181223195419 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 4000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 793.7120312511385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_793.7120312511385", lastmod: Timestamp 4000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 793.7120312511385 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 802.4966878498034 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_802.4966878498034", lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 802.4966878498034 }, max: { a: 807.4105833931693 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_807.4105833931693", lastmod: Timestamp 4000|118, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 807.4105833931693 }, max: { a: 810.8918013325706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_810.8918013325706", lastmod: Timestamp 4000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 810.8918013325706 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 4000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 824.2680954051706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_824.2680954051706", lastmod: Timestamp 4000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 824.2680954051706 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 4000|216, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 836.3608305125814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_836.3608305125814", lastmod: Timestamp 4000|217, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 836.3608305125814 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 4000|122, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 843.8858257205128 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_843.8858257205128", lastmod: Timestamp 4000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 843.8858257205128 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 4000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 851.468355264985 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_851.468355264985", lastmod: Timestamp 4000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 851.468355264985 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 4000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 864.7746195980726 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_864.7746195980726", lastmod: Timestamp 4000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 864.7746195980726 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 4000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 877.8438233640235 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_877.8438233640235", lastmod: Timestamp 4000|87, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 877.8438233640235 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 4000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 886.5207670748756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_886.5207670748756", lastmod: Timestamp 4000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 886.5207670748756 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 4000|190, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 894.8106130543974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_894.8106130543974", lastmod: Timestamp 4000|191, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 894.8106130543974 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 4000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 901.6037051063506 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_901.6037051063506", lastmod: Timestamp 4000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 901.6037051063506 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 4000|254, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 907.8304631917699 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_907.8304631917699", lastmod: Timestamp 4000|255, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 907.8304631917699 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 4000|146, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 914.1361338478089 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_914.1361338478089", lastmod: Timestamp 4000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 914.1361338478089 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 4000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 921.5853246168082 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_921.5853246168082", lastmod: Timestamp 4000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 921.5853246168082 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 4000|152, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 951.1531632632295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_951.1531632632295", lastmod: Timestamp 4000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 951.1531632632295 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 960.5824651536831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_960.5824651536831", lastmod: Timestamp 4000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 960.5824651536831 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 4000|194, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 973.4895868865218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_973.4895868865218", lastmod: Timestamp 4000|195, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 973.4895868865218 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 4000|96, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 980.667776515926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_980.667776515926", lastmod: Timestamp 4000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 980.667776515926 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 4000|230, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 988.3510075746844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_988.3510075746844", lastmod: Timestamp 4000|231, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 988.3510075746844 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 4000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 994.7222740534528 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_994.7222740534528", lastmod: Timestamp 4000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 994.7222740534528 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] ----
m30999| Thu Jun 14 01:46:34 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:46:34 [Balancer] donor : 255 chunks on shard0001
m30999| Thu Jun 14 01:46:34 [Balancer] receiver : 6 chunks on shard0000
m30999| Thu Jun 14 01:46:34 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
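The donor/receiver/chose lines above summarize one balancing round for test.foo: shard0001 holds 255 chunks while shard0000 holds only 6, so the Balancer selects a single chunk to migrate from the overloaded shard to the underloaded one (here the chunk beginning at a: 16.11151483141404). The sketch below is an illustration of that kind of count-based selection only; the function name, the chunk layout, and the imbalance threshold are assumptions for the example, not MongoDB's actual balancer implementation.

    # Illustrative sketch (not MongoDB's implementation): pick a migration
    # candidate by comparing per-shard chunk counts, mirroring the
    # "donor ... receiver ... chose" summary printed by the Balancer above.

    def choose_migration(shard_chunks, threshold=8):
        """shard_chunks: dict mapping shard name -> ordered list of chunk ids
        (assumed layout). Returns (donor, receiver, chunk) or None when the
        cluster is considered balanced."""
        donor = max(shard_chunks, key=lambda s: len(shard_chunks[s]))
        receiver = min(shard_chunks, key=lambda s: len(shard_chunks[s]))
        imbalance = len(shard_chunks[donor]) - len(shard_chunks[receiver])
        if donor == receiver or imbalance < threshold:
            return None  # nothing worth moving this round
        # Move the lowest-range chunk off the donor, analogous to the log's
        # pick of the chunk starting at a: 16.11151483141404.
        chunk = shard_chunks[donor][0]
        return donor, receiver, chunk

    # Example mirroring the chunk counts reported in the log above:
    counts = {"shard0001": ["chunk%d" % i for i in range(255)],
              "shard0000": ["chunk%d" % i for i in range(6)]}
    print(choose_migration(counts))  # ('shard0001', 'shard0000', 'chunk0')

In the log that follows, the Balancer then prints its ShardInfoMap (per-shard size and state) and ShardToChunksMap (the chunks currently owned by each shard) before carrying out the chosen migration.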
m30999| Thu Jun 14 01:46:34 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:46:34 [Balancer] shard0000 maxSize: 0 currSize: 160 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:34 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:46:34 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:46:34 [Balancer] shard0000
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff228c8')", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, max: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22c95')", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, max: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff2305f')", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, max: { _id: ObjectId('4fd97a3d05a35677eff23246') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2342c')", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, max: { _id: ObjectId('4fd97a3d05a35677eff23611') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff237f5')", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, max: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23bc4')", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, max: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23f8f')", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24176') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2435d')", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, max: { _id: ObjectId('4fd97a3d05a35677eff24541') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24727')", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24727') }, max: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, shard: "shard0000" }
m30000| Thu Jun 14 01:46:34 [conn20] end connection 127.0.0.1:39147 (17 connections now open)
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24af4')", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, max: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24ec4')", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, max: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25295')", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25295') }, max: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25663')", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25663') }, max: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25a31')", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, max: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25e01')", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, max: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff261d0')", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, max: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff26598')", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff26598') }, max: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26964')", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26964') }, max: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26d35')", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, max: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27105')", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27105') }, max: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff274d5')", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, max: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff278a1')", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, max: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27c6f')", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2803f')", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, max: { _id: ObjectId('4fd97a3f05a35677eff28226') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2840d')", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, max: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff287d7')", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff287d7') }, max: { _id: ObjectId('4fd97a4005a35677eff289bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28ba4')", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, max: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28f71')", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28f71') }, max: { _id: ObjectId('4fd97a4005a35677eff29159') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2933f')", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2933f') }, max: { _id: ObjectId('4fd97a4005a35677eff29523') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29708')", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29708') }, max: { _id: ObjectId('4fd97a4005a35677eff298ed') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29ad4')", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, max: { _id: ObjectId('4fd97a4005a35677eff29cba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29e9f')", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, max: { _id: ObjectId('4fd97a4005a35677eff2a086') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a26b')", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, max: { _id: ObjectId('4fd97a4005a35677eff2a450') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a636')", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a636') }, max: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2aa03')", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, max: { _id: ObjectId('4fd97a4105a35677eff2abea') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2add0')", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2add0') }, max: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b1a0')", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, max: { _id: ObjectId('4fd97a4105a35677eff2b387') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b56f')", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, max: { _id: ObjectId('4fd97a4105a35677eff2b757') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b93b')", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, max: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bd07')", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, max: { _id: ObjectId('4fd97a4205a35677eff2beee') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c0d4')", lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, max: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c4a2')", lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, max: { _id: ObjectId('4fd97a4205a35677eff2c687') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c86f')", lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, max: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2cc39')", lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, max: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d008')", lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d008') }, max: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d3d5')", lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, max: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d7a1')", lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, max: { _id: ObjectId('4fd97a4305a35677eff2d986') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2db6f')", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, max: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2df3e')", lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, max: { _id: ObjectId('4fd97a4305a35677eff2e127') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e30d')", lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, max: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e6d8')", lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, max: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2eaa5')", lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, max: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ee6d')", lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, max: { _id: ObjectId('4fd97a4305a35677eff2f052') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f239')", lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f239') }, max: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f603')", lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f603') }, max: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f9cd')", lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, max: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fd9a')", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, max: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3016a')", lastmod: Timestamp 1000|115, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3016a') }, max: { _id: ObjectId('4fd97a4405a35677eff30351') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30537')", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30537') }, max: { _id: ObjectId('4fd97a4405a35677eff30721') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30907')", lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30907') }, max: { _id: ObjectId('4fd97a4405a35677eff30aef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30cd5')", lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, max: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff310a7')", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff310a7') }, max: { _id: ObjectId('4fd97a4405a35677eff3128e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31473')", lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31473') }, max: { _id: ObjectId('4fd97a4405a35677eff3165b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31841')", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31841') }, max: { _id: ObjectId('4fd97a4405a35677eff31a28') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31c0d')", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, max: { _id: ObjectId('4fd97a4405a35677eff31df3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31fda')", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31fda') }, max: { _id: ObjectId('4fd97a4405a35677eff321bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff323a4')", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff323a4') }, max: { _id: ObjectId('4fd97a4405a35677eff3258c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32774')", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32774') }, max: { _id: ObjectId('4fd97a4505a35677eff32958') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32b3d')", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, max: { _id: ObjectId('4fd97a4505a35677eff32d23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32f0c')", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, max: { _id: ObjectId('4fd97a4505a35677eff330f5') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff332d9')", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff332d9') }, max: { _id: ObjectId('4fd97a4505a35677eff334c2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff336ab')", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff336ab') }, max: { _id: ObjectId('4fd97a4505a35677eff33891') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33a77')", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33a77') }, max: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33e41')", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33e41') }, max: { _id: ObjectId('4fd97a4605a35677eff34026') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff3420d')", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff3420d') }, max: { _id: ObjectId('4fd97a4605a35677eff343f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff345d9')", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff345d9') }, max: { _id: ObjectId('4fd97a4605a35677eff347c1') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff349a9')", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff349a9') }, max: { _id: ObjectId('4fd97a4705a35677eff34b90') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34d79')", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34d79') }, max: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35147')", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35147') }, max: { _id: ObjectId('4fd97a4705a35677eff3532c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35511')", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35511') }, max: { _id: ObjectId('4fd97a4705a35677eff356fa') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff358e1')", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff358e1') }, max: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35cab')", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35cab') }, max: { _id: ObjectId('4fd97a4705a35677eff35e91') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3607a')", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3607a') }, max: { _id: ObjectId('4fd97a4805a35677eff3625f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36447')", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36447') }, max: { _id: ObjectId('4fd97a4805a35677eff3662c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36814')", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36814') }, max: { _id: ObjectId('4fd97a4805a35677eff369f9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36be0')", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36be0') }, max: { _id: ObjectId('4fd97a4805a35677eff36dca') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36faf')", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36faf') }, max: { _id: ObjectId('4fd97a4805a35677eff37195') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3737a')", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3737a') }, max: { _id: ObjectId('4fd97a4805a35677eff37560') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37747')", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37747') }, max: { _id: ObjectId('4fd97a4905a35677eff3792f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37b15')", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37b15') }, max: { _id: ObjectId('4fd97a4905a35677eff37cff') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37ee8')", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, max: { _id: ObjectId('4fd97a4905a35677eff380d0') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff382b9')", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff382b9') }, max: { _id: ObjectId('4fd97a4905a35677eff3849e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38684')", lastmod: Timestamp 1000|185, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38684') }, max: { _id: ObjectId('4fd97a4905a35677eff38869') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38a4e')", lastmod: Timestamp 1000|187, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, max: { _id: ObjectId('4fd97a4905a35677eff38c32') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38e1d')", lastmod: Timestamp 1000|189, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, max: { _id: ObjectId('4fd97a4905a35677eff39001') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff391e8')", lastmod: Timestamp 1000|191, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff391e8') }, max: { _id: ObjectId('4fd97a4905a35677eff393cf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff395b6')", lastmod: Timestamp 1000|193, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff395b6') }, max: { _id: ObjectId('4fd97a4905a35677eff3979b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39985')", lastmod: Timestamp 1000|195, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39985') }, max: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39d51')", lastmod: Timestamp 1000|197, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, max: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a121')", lastmod: Timestamp 1000|199, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a4ed')", lastmod: Timestamp 1000|201, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a8b9')", lastmod: Timestamp 1000|203, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, max: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3ac84')", lastmod: Timestamp 1000|205, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:46:34 [Balancer] shard0001
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: MinKey }, max: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22aac')", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, max: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22e7b')", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, max: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23246')", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23246') }, max: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23611')", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23611') }, max: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff239dc')", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, max: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23da9')", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, max: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24176')", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24176') }, max: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24541')", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24541') }, max: { _id: ObjectId('4fd97a3d05a35677eff24727') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2490f')", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24cde')", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, max: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff250ad')", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, max: { _id: ObjectId('4fd97a3e05a35677eff25295') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2547d')", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, max: { _id: ObjectId('4fd97a3e05a35677eff25663') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2584a')", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, max: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25c16')", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, max: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25fe8')", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, max: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff263b4')", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, max: { _id: ObjectId('4fd97a3e05a35677eff26598') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2677e')", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, max: { _id: ObjectId('4fd97a3f05a35677eff26964') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26b4c')", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, max: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26f1f')", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27105') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff272ec')", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, max: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff276ba')", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, max: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27a87')", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, max: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27e57')", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, max: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff28226')", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff28226') }, max: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff285f3')", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, max: { _id: ObjectId('4fd97a4005a35677eff287d7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff289bf')", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff289bf') }, max: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28d8b')", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, max: { _id: ObjectId('4fd97a4005a35677eff28f71') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29159')", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29159') }, max: { _id: ObjectId('4fd97a4005a35677eff2933f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29523')", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29523') }, max: { _id: ObjectId('4fd97a4005a35677eff29708') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff298ed')", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff298ed') }, max: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29cba')", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29cba') }, max: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a086')", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a086') }, max: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a450')", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a450') }, max: { _id: ObjectId('4fd97a4105a35677eff2a636') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a81d')", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, max: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2abea')", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2abea') }, max: { _id: ObjectId('4fd97a4105a35677eff2add0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2afb8')", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, max: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b387')", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b387') }, max: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b757')", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b757') }, max: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bb23')", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, max: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2beee')", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2beee') }, max: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c2bb')", lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, max: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c687')", lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c687') }, max: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ca54')", lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, max: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ce20')", lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, max: { _id: ObjectId('4fd97a4205a35677eff2d008') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d1ef')", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, max: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d5bc')", lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, max: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2d986')", lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2d986') }, max: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2dd54')", lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, max: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e127')", lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e127') }, max: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e4f2')", lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, max: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e8bf')", lastmod: Timestamp 1000|102, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, max: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ec89')", lastmod: Timestamp 1000|104, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, max: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f052')", lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f052') }, max: { _id: ObjectId('4fd97a4305a35677eff2f239') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f41f')", lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, max: { _id: ObjectId('4fd97a4305a35677eff2f603') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f7e7')", lastmod: Timestamp 1000|110, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, max: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fbb4')", lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, max: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ff82')", lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, max: { _id: ObjectId('4fd97a4405a35677eff3016a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30351')", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30351') }, max: { _id: ObjectId('4fd97a4405a35677eff30537') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30721')", lastmod: Timestamp 1000|118, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30721') }, max: { _id: ObjectId('4fd97a4405a35677eff30907') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30aef')", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30aef') }, max: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30ebc')", lastmod: Timestamp 1000|122, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, max: { _id: ObjectId('4fd97a4405a35677eff310a7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3128e')", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3128e') }, max: { _id: ObjectId('4fd97a4405a35677eff31473') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3165b')", lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3165b') }, max: { _id: ObjectId('4fd97a4405a35677eff31841') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31a28')", lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31a28') }, max: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31df3')", lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31df3') }, max: { _id: ObjectId('4fd97a4405a35677eff31fda') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff321bf')", lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff321bf') }, max: { _id: ObjectId('4fd97a4405a35677eff323a4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3258c')", lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3258c') }, max: { _id: ObjectId('4fd97a4505a35677eff32774') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32958')", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32958') }, max: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32d23')", lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32d23') }, max: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff330f5')", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff330f5') }, max: { _id: ObjectId('4fd97a4505a35677eff332d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff334c2')", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff334c2') }, max: { _id: ObjectId('4fd97a4505a35677eff336ab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff33891')", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff33891') }, max: { _id: ObjectId('4fd97a4605a35677eff33a77') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33c5c')", lastmod: Timestamp 1000|146, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, max: { _id: ObjectId('4fd97a4605a35677eff33e41') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff34026')", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff34026') }, max: { _id: ObjectId('4fd97a4605a35677eff3420d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff343f3')", lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff343f3') }, max: { _id: ObjectId('4fd97a4605a35677eff345d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff347c1')", lastmod: Timestamp 1000|152, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff347c1') }, max: { _id: ObjectId('4fd97a4605a35677eff349a9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34b90')", lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34b90') }, max: { _id: ObjectId('4fd97a4705a35677eff34d79') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34f5f')", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, max: { _id: ObjectId('4fd97a4705a35677eff35147') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff3532c')", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff3532c') }, max: { _id: ObjectId('4fd97a4705a35677eff35511') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff356fa')", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff356fa') }, max: { _id: ObjectId('4fd97a4705a35677eff358e1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35ac6')", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, max: { _id: ObjectId('4fd97a4705a35677eff35cab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35e91')", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35e91') }, max: { _id: ObjectId('4fd97a4805a35677eff3607a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3625f')", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3625f') }, max: { _id: ObjectId('4fd97a4805a35677eff36447') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3662c')", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3662c') }, max: { _id: ObjectId('4fd97a4805a35677eff36814') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff369f9')", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff369f9') }, max: { _id: ObjectId('4fd97a4805a35677eff36be0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36dca')", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36dca') }, max: { _id: ObjectId('4fd97a4805a35677eff36faf') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37195')", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37195') }, max: { _id: ObjectId('4fd97a4805a35677eff3737a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37560')", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37560') }, max: { _id: ObjectId('4fd97a4905a35677eff37747') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3792f')", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3792f') }, max: { _id: ObjectId('4fd97a4905a35677eff37b15') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37cff')", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37cff') }, max: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff380d0')", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff380d0') }, max: { _id: ObjectId('4fd97a4905a35677eff382b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3849e')", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3849e') }, max: { _id: ObjectId('4fd97a4905a35677eff38684') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38869')", lastmod: Timestamp 1000|186, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38869') }, max: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38c32')", lastmod: Timestamp 1000|188, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38c32') }, max: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff39001')", lastmod: Timestamp 1000|190, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff39001') }, max: { _id: ObjectId('4fd97a4905a35677eff391e8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff393cf')", lastmod: Timestamp 1000|192, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff393cf') }, max: { _id: ObjectId('4fd97a4905a35677eff395b6') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3979b')", lastmod: Timestamp 1000|194, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3979b') }, max: { _id: ObjectId('4fd97a4a05a35677eff39985') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39b6a')", lastmod: Timestamp 1000|196, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, max: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39f36')", lastmod: Timestamp 1000|198, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a306')", lastmod: Timestamp 1000|200, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a6d3')", lastmod: Timestamp 1000|202, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3aa9d')", lastmod: Timestamp 1000|204, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, max: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, shard: "shard0001" }
m30999| Thu Jun 14 01:46:34 [Balancer] ----
m30999| Thu Jun 14 01:46:34 [Balancer] collection : test.mrShardedOut
m30999| Thu Jun 14 01:46:34 [Balancer] donor : 103 chunks on shard0000
m30999| Thu Jun 14 01:46:34 [Balancer] receiver : 103 chunks on shard0000
m30999| Thu Jun 14 01:46:34 [Balancer] Assertion: 10320:BSONElement: bad type 115
m30999| 0x84f514a 0x8126495 0x83f3537 0x811ddd3 0x81f8992 0x835a481 0x82c3073 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0x9d4542 0x40db6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2a) [0x84f514a]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8126495]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc7) [0x83f3537]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo11BSONElement4sizeEv+0x1b3) [0x811ddd3]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo7BSONObj13extractFieldsERKS0_b+0x132) [0x81f8992]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo12ChunkManager9findChunkERKNS_7BSONObjE+0x1e1) [0x835a481]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x613) [0x82c3073]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c) [0x82c4b6c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0) [0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0x9d4542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x40db6e]
m30999| Thu Jun 14 01:46:34 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' unlocked.
m30999| Thu Jun 14 01:46:34 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Thu Jun 14 01:46:34 [Balancer] caught exception while doing balance: BSONElement: bad type 115
m30999| Thu Jun 14 01:46:34 [Balancer] *** End of balancing round
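The assertion above (10320: BSONElement: bad type 115) fires when a BSON element's leading type byte is not one of the codes the BSON spec defines; 115 is 0x73 (ASCII 's'), which is not a valid type code, so the Balancer's findChunk call bailed out while parsing a chunk boundary document. As a hypothetical illustrative sketch only (not the mongos implementation), the check amounts to validating that type byte:

    // Sketch: element type bytes defined by the 2012-era BSON spec.
    // Anything else (e.g. 115 / 0x73) is what trips "bad type" above.
    var VALID_BSON_TYPES = {};
    [0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a,
     0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12,
     0x7f /* MaxKey */, 0xff /* MinKey, stored as -1 */].forEach(function (t) {
        VALID_BSON_TYPES[t] = true;
    });

    function isValidBsonTypeByte(b) {
        // Treat the byte as unsigned so -1 (MinKey) maps to 0xff.
        return VALID_BSON_TYPES[b & 0xff] === true;
    }

    print(isValidBsonTypeByte(0x07)); // true  -- ObjectId
    print(isValidBsonTypeByte(115));  // false -- the value the Balancer hit

The balancer catches the exception (see the "caught exception while doing balance" line below) and simply ends the round rather than aborting mongos.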
m30001| Thu Jun 14 01:46:36 [conn3] 143700/196848 73%
m30001| Thu Jun 14 01:46:39 [conn3] 181200/196848 92%
m30001| Thu Jun 14 01:46:40 [conn3] CMD: drop test.tmp.mrs.foo_1339652767_1
m30001| Thu Jun 14 01:46:40 [conn3] CMD: drop test.tmp.mr.foo_2
m30001| Thu Jun 14 01:46:40 [conn3] request split points lookup for chunk test.tmp.mrs.foo_1339652767_1 { : MinKey } -->> { : MaxKey }
m30001| Thu Jun 14 01:46:40 [conn3] warning: Finding the split vector for test.tmp.mrs.foo_1339652767_1 over { _id: 1 } keyCount: 483 numSplits: 406 lookedAt: 344 took 348ms
m30001| Thu Jun 14 01:46:40 [conn3] command admin.$cmd command: { splitVector: "test.tmp.mrs.foo_1339652767_1", keyPattern: { _id: 1 }, maxChunkSizeBytes: 1048576 } ntoreturn:1 keyUpdates:0 locks(micros) r:349096 reslen:10905 349ms
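The splitVector command logged above can also be issued directly against the shard from a mongo shell; a sketch mirroring that exact call, assuming the shard m30001 is reachable on localhost:30001 as in this test:

    // Ask the shard for split points over _id, targeting ~1MB chunks,
    // exactly as the mongos did in the command log above.
    var shard = new Mongo("localhost:30001");
    var res = shard.getDB("admin").runCommand({
        splitVector: "test.tmp.mrs.foo_1339652767_1",
        keyPattern: { _id: 1 },
        maxChunkSizeBytes: 1048576
    });
    // res.splitKeys holds the split points (the log reports numSplits: 406).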
m30001| Thu Jun 14 01:46:40 [conn3] CMD: drop test.tmp.mr.foo_2
m30001| Thu Jun 14 01:46:40 [conn3] CMD: drop test.tmp.mr.foo_2_inc
m30001| Thu Jun 14 01:46:40 [conn3] command test.$cmd command: { mapreduce: "foo", map: function map2() {
m30001| emit(this._id, {count:1, y:this.y});
m30001| }, reduce: function reduce2(key, values) {
m30001| return values[0];
m30001| }, out: "tmp.mrs.foo_1339652767_1", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 198817 locks(micros) W:35730 r:23063792 w:67121376 reslen:11016 33365ms
m30001| Thu Jun 14 01:46:40 [conn3] CMD: drop test.tmp.mr.foo_3
m30001| Thu Jun 14 01:46:40 [conn3] build index test.tmp.mr.foo_3 { _id: 1 }
m30001| Thu Jun 14 01:46:40 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Thu Jun 14 01:46:40 [conn7] CMD: drop test.tmp.mr.foo_3
m30000| Thu Jun 14 01:46:40 [conn7] build index test.tmp.mr.foo_3 { _id: 1 }
m30000| Thu Jun 14 01:46:40 [conn7] build index done. scanned 0 total records. 0 secs
m30999| Thu Jun 14 01:46:40 [conn] MR with sharded output, NS=test.mrShardedOut
m30999| Thu Jun 14 01:46:40 [conn] created new distributed lock for test.mrShardedOut on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Thu Jun 14 01:46:40 [conn] about to acquire distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:conn:512508528",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:46:40 2012" },
m30999| "why" : "mr-post-process",
m30999| "ts" : { "$oid" : "4fd97ac00d2fef4d6a507bed" } }
m30999| { "_id" : "test.mrShardedOut",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97a640d2fef4d6a507be8" } }
m30999| Thu Jun 14 01:46:40 [conn] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97ac00d2fef4d6a507bed
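The two JSON documents printed above are the lock document mongos is trying to write (state 1, why "mr-post-process") and the current on-disk state of that lock (state 0, i.e. free). They live in config.locks on the config server, so a quick shell check while post-processing runs could look like this, assuming the config server is the localhost:30000 instance used by this test:

    // Inspect the distributed lock guarding the mr-post-process step.
    var config = new Mongo("localhost:30000").getDB("config");
    config.locks.find({ _id: "test.mrShardedOut" }).forEach(printjson);
    // state: 0 = free, 1 = being acquired, 2 = held; the "who"/"why"/"ts"
    // fields match what mongos printed above.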
m30000| Thu Jun 14 01:46:41 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.4, filling with zeroes...
m30000| Thu Jun 14 01:46:50 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.4, size: 256MB, took 8.361 secs
m30001| Thu Jun 14 01:46:50 [conn3] CMD: drop test.mrShardedOut
m30001| Thu Jun 14 01:46:50 [conn3] CMD: drop test.tmp.mr.foo_3
m30001| Thu Jun 14 01:46:50 [conn3] CMD: drop test.tmp.mr.foo_3
m30001| Thu Jun 14 01:46:50 [conn3] CMD: drop test.tmp.mr.foo_3
m30001| WARNING: mongod wrote null bytes to output
m30000| Thu Jun 14 01:46:56 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.5, filling with zeroes...
m30999| Thu Jun 14 01:46:58 [LockPinger] cluster localhost:30000 pinged successfully at Thu Jun 14 01:46:58 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1339652667:1804289383', sleeping for 30000ms
m30000| Thu Jun 14 01:46:58 [conn8] update config.lockpings query: { _id: "domU-12-31-39-01-70-B4:30001:1339652668:318525290" } update: { $set: { ping: new Date(1339652818708) } } nscanned:1 nupdated:1 keyUpdates:1 locks(micros) r:70415 w:219366 211ms
m30999| Thu Jun 14 01:47:04 [Balancer] creating new connection to:localhost:30000
m30999| Thu Jun 14 01:47:05 BackgroundJob starting: ConnectBG
m30999| Thu Jun 14 01:47:05 [Balancer] connected connection!
m30000| Thu Jun 14 01:47:05 [initandlisten] connection accepted from 127.0.0.1:39149 #22 (18 connections now open)
m30000| Thu Jun 14 01:47:05 [conn21] update config.mongos query: { _id: "domU-12-31-39-01-70-B4:30999" } update: { $set: { ping: new Date(1339652824997), up: 157, waiting: false } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) r:269 w:549840 549ms
m30000| Thu Jun 14 01:47:06 [conn22] query config.shards ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:378584 nreturned:2 reslen:120 378ms
m30999| Thu Jun 14 01:47:06 [Balancer] Refreshing MaxChunkSize: 1
m30000| Thu Jun 14 01:47:06 [conn22] query config.settings query: { _id: "chunksize" } ntoreturn:1 idhack:1 keyUpdates:0 locks(micros) r:881186 reslen:59 502ms
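"Refreshing MaxChunkSize: 1" means the balancer re-read the chunk size (in MB) from config.settings, and this test appears to run with it at 1 MB so chunks split quickly. A sketch of reading and writing that setting from the shell, again assuming the localhost:30000 config server:

    // The balancer's max chunk size lives in config.settings, keyed by "chunksize".
    var settings = new Mongo("localhost:30000").getDB("config").settings;
    printjson(settings.findOne({ _id: "chunksize" }));  // e.g. { _id: "chunksize", value: 1 }
    settings.update({ _id: "chunksize" }, { $set: { value: 1 } }, /* upsert */ true);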
m30999| Thu Jun 14 01:47:06 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1339652667:1804289383",
m30999| "when" : { "$date" : "Thu Jun 14 01:47:06 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd97ada0d2fef4d6a507bee" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd97aba0d2fef4d6a507bec" } }
m30999| Thu Jun 14 01:47:06 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1339652667:1804289383' acquired, ts : 4fd97ada0d2fef4d6a507bee
m30999| Thu Jun 14 01:47:06 [Balancer] *** start balancing round
m30000| Thu Jun 14 01:47:07 [conn21] query config.collections ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:374373 w:549840 nreturned:2 reslen:239 374ms
m30000| Thu Jun 14 01:47:07 [conn22] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:881315 w:9764 reslen:1940 506ms
m30001| Thu Jun 14 01:46:50 [conn3] warning: log line attempted (19k) over max size(10k), printing beginning and end ... Thu Jun 14 01:47:08 [conn2] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:7 W:71 r:1765621 w:1313035 reslen:1777 720ms
m30000| Thu Jun 14 01:47:09 [conn21] query config.chunks query: { query: { ns: "test.foo" }, orderby: { min: 1 } } cursorid:2184198552782630926 ntoreturn:0 ntoskip:0 nscanned:102 keyUpdates:0 locks(micros) r:1288565 w:549840 nreturned:101 reslen:16764 914ms
m30999| Thu Jun 14 01:47:09 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:47:09 [Balancer] shard0000 maxSize: 0 currSize: 544 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:47:09 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:47:09 [Balancer] ---- ShardToChunksMap
m30999| Thu Jun 14 01:47:09 [Balancer] shard0000
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.07367152018367129 }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_0.07367152018367129", lastmod: Timestamp 4000|256, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 0.07367152018367129 }, max: { a: 2.742599007396374 }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_2.742599007396374", lastmod: Timestamp 4000|257, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 2.742599007396374 }, max: { a: 5.826356493812579 }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_5.826356493812579", lastmod: Timestamp 4000|228, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 5.826356493812579 }, max: { a: 8.457858050974988 }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_8.457858050974988", lastmod: Timestamp 4000|229, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 8.457858050974988 }, max: { a: 12.55217658236718 }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_12.55217658236718", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 12.55217658236718 }, max: { a: 16.11151483141404 }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] shard0001
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_20.02617482801994", lastmod: Timestamp 4000|214, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 20.02617482801994 }, max: { a: 22.72135361925398 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_22.72135361925398", lastmod: Timestamp 4000|215, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 22.72135361925398 }, max: { a: 25.60273139230473 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_25.60273139230473", lastmod: Timestamp 2000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 25.60273139230473 }, max: { a: 30.85678137192671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_30.85678137192671", lastmod: Timestamp 4000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 30.85678137192671 }, max: { a: 34.95140019143683 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_34.95140019143683", lastmod: Timestamp 4000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 34.95140019143683 }, max: { a: 39.89992532263464 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_39.89992532263464", lastmod: Timestamp 4000|42, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 39.89992532263464 }, max: { a: 43.98990958864879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_43.98990958864879", lastmod: Timestamp 4000|43, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 43.98990958864879 }, max: { a: 47.94081917961535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_47.94081917961535", lastmod: Timestamp 4000|102, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 47.94081917961535 }, max: { a: 51.90923851177054 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_51.90923851177054", lastmod: Timestamp 4000|103, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 51.90923851177054 }, max: { a: 57.56464668319472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_57.56464668319472", lastmod: Timestamp 4000|34, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 57.56464668319472 }, max: { a: 61.76919454003927 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_61.76919454003927", lastmod: Timestamp 4000|35, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 61.76919454003927 }, max: { a: 66.37486853611429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_66.37486853611429", lastmod: Timestamp 4000|76, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 66.37486853611429 }, max: { a: 70.06331619195872 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_70.06331619195872", lastmod: Timestamp 4000|77, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 70.06331619195872 }, max: { a: 74.43717892117874 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_74.43717892117874", lastmod: Timestamp 4000|32, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 74.43717892117874 }, max: { a: 78.73686651492073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_78.73686651492073", lastmod: Timestamp 4000|33, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 78.73686651492073 }, max: { a: 83.77384564239721 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_83.77384564239721", lastmod: Timestamp 4000|120, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 83.77384564239721 }, max: { a: 87.41840730135154 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_87.41840730135154", lastmod: Timestamp 4000|196, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 87.41840730135154 }, max: { a: 89.89791872458619 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_89.89791872458619", lastmod: Timestamp 4000|197, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 89.89791872458619 }, max: { a: 92.91917824556573 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_92.91917824556573", lastmod: Timestamp 4000|220, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 92.91917824556573 }, max: { a: 95.6069228239147 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_95.6069228239147", lastmod: Timestamp 4000|244, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 95.6069228239147 }, max: { a: 98.16826107499755 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_98.16826107499755", lastmod: Timestamp 4000|245, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 98.16826107499755 }, max: { a: 101.960589257945 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_101.960589257945", lastmod: Timestamp 4000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 101.960589257945 }, max: { a: 106.0311910436654 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_106.0311910436654", lastmod: Timestamp 4000|45, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 106.0311910436654 }, max: { a: 111.0431509615952 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_111.0431509615952", lastmod: Timestamp 4000|78, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 111.0431509615952 }, max: { a: 114.9662096443472 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_114.9662096443472", lastmod: Timestamp 4000|156, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 114.9662096443472 }, max: { a: 118.3157678917793 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_118.3157678917793", lastmod: Timestamp 4000|157, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 118.3157678917793 }, max: { a: 123.1918419151289 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_123.1918419151289", lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 123.1918419151289 }, max: { a: 127.4590140914801 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_127.4590140914801", lastmod: Timestamp 4000|46, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 127.4590140914801 }, max: { a: 131.8115136015859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_131.8115136015859", lastmod: Timestamp 4000|47, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 131.8115136015859 }, max: { a: 136.5735165062921 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_136.5735165062921", lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 136.5735165062921 }, max: { a: 141.1884883168546 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_141.1884883168546", lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 141.1884883168546 }, max: { a: 146.6503611644078 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_146.6503611644078", lastmod: Timestamp 4000|112, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 146.6503611644078 }, max: { a: 150.1357777689222 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_150.1357777689222", lastmod: Timestamp 4000|116, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 150.1357777689222 }, max: { a: 153.684305048146 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_153.684305048146", lastmod: Timestamp 4000|117, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 153.684305048146 }, max: { a: 159.2125242384949 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_159.2125242384949", lastmod: Timestamp 4000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 159.2125242384949 }, max: { a: 163.3701742796004 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_163.3701742796004", lastmod: Timestamp 4000|51, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 163.3701742796004 }, max: { a: 167.6382092456179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_167.6382092456179", lastmod: Timestamp 4000|226, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 167.6382092456179 }, max: { a: 170.2748683082939 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_170.2748683082939", lastmod: Timestamp 4000|227, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 170.2748683082939 }, max: { a: 176.0230312595962 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_176.0230312595962", lastmod: Timestamp 4000|224, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 176.0230312595962 }, max: { a: 178.4802269484291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_178.4802269484291", lastmod: Timestamp 4000|225, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 178.4802269484291 }, max: { a: 181.7281932506388 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_181.7281932506388", lastmod: Timestamp 4000|154, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 181.7281932506388 }, max: { a: 184.9464054233513 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_184.9464054233513", lastmod: Timestamp 4000|155, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 184.9464054233513 }, max: { a: 188.6698238706465 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_188.6698238706465", lastmod: Timestamp 4000|164, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 188.6698238706465 }, max: { a: 191.5307698720086 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_191.5307698720086", lastmod: Timestamp 4000|165, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 191.5307698720086 }, max: { a: 194.8927257678023 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_194.8927257678023", lastmod: Timestamp 4000|94, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 194.8927257678023 }, max: { a: 198.5601903660538 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_198.5601903660538", lastmod: Timestamp 4000|95, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 198.5601903660538 }, max: { a: 204.0577089538382 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_204.0577089538382", lastmod: Timestamp 4000|184, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 204.0577089538382 }, max: { a: 207.0875453859469 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_207.0875453859469", lastmod: Timestamp 4000|185, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 207.0875453859469 }, max: { a: 209.8684815227433 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_209.8684815227433", lastmod: Timestamp 4000|204, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 209.8684815227433 }, max: { a: 212.8104857756458 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_212.8104857756458", lastmod: Timestamp 4000|205, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 212.8104857756458 }, max: { a: 216.8904302452864 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_216.8904302452864", lastmod: Timestamp 4000|100, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 216.8904302452864 }, max: { a: 220.5716558736682 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_220.5716558736682", lastmod: Timestamp 4000|248, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 220.5716558736682 }, max: { a: 222.9840106087572 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_222.9840106087572", lastmod: Timestamp 4000|249, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 222.9840106087572 }, max: { a: 225.5962198744838 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_225.5962198744838", lastmod: Timestamp 4000|170, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 225.5962198744838 }, max: { a: 228.7035403403385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_228.7035403403385", lastmod: Timestamp 4000|234, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 228.7035403403385 }, max: { a: 231.249558963907 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_231.249558963907", lastmod: Timestamp 4000|235, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 231.249558963907 }, max: { a: 233.8565055904641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_233.8565055904641", lastmod: Timestamp 4000|176, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 233.8565055904641 }, max: { a: 236.7690508533622 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_236.7690508533622", lastmod: Timestamp 4000|177, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 236.7690508533622 }, max: { a: 240.0709323500288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_240.0709323500288", lastmod: Timestamp 4000|210, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 240.0709323500288 }, max: { a: 242.6421093833427 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_242.6421093833427", lastmod: Timestamp 4000|266, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 242.6421093833427 }, max: { a: 245.1924455307789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_245.1924455307789", lastmod: Timestamp 4000|267, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 245.1924455307789 }, max: { a: 248.3080159156712 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_248.3080159156712", lastmod: Timestamp 4000|222, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 248.3080159156712 }, max: { a: 250.7993295308498 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_250.7993295308498", lastmod: Timestamp 4000|223, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 250.7993295308498 }, max: { a: 254.1395685736485 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_254.1395685736485", lastmod: Timestamp 4000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 254.1395685736485 }, max: { a: 258.6206493525194 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_258.6206493525194", lastmod: Timestamp 4000|264, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 258.6206493525194 }, max: { a: 261.2663901230094 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_261.2663901230094", lastmod: Timestamp 4000|265, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 261.2663901230094 }, max: { a: 264.0825842924789 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_264.0825842924789", lastmod: Timestamp 2000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 264.0825842924789 }, max: { a: 269.785248844529 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_269.785248844529", lastmod: Timestamp 2000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 269.785248844529 }, max: { a: 277.1560315461681 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_277.1560315461681", lastmod: Timestamp 4000|132, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 277.1560315461681 }, max: { a: 280.6827052136106 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_280.6827052136106", lastmod: Timestamp 4000|133, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 280.6827052136106 }, max: { a: 284.9747465988205 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_284.9747465988205", lastmod: Timestamp 4000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 284.9747465988205 }, max: { a: 289.7137301985317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_289.7137301985317", lastmod: Timestamp 4000|21, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 289.7137301985317 }, max: { a: 294.0222214358918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_294.0222214358918", lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 294.0222214358918 }, max: { a: 300.0603324337813 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_300.0603324337813", lastmod: Timestamp 4000|252, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 300.0603324337813 }, max: { a: 302.7151830329477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_302.7151830329477", lastmod: Timestamp 4000|253, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 302.7151830329477 }, max: { a: 309.3101713472285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_309.3101713472285", lastmod: Timestamp 4000|186, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 309.3101713472285 }, max: { a: 312.3135459595852 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_312.3135459595852", lastmod: Timestamp 4000|187, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 312.3135459595852 }, max: { a: 315.9151551096841 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_315.9151551096841", lastmod: Timestamp 2000|50, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 315.9151551096841 }, max: { a: 321.3459727153073 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_321.3459727153073", lastmod: Timestamp 4000|250, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 321.3459727153073 }, max: { a: 323.8729876956295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_323.8729876956295", lastmod: Timestamp 4000|251, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 323.8729876956295 }, max: { a: 327.5292321238884 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_327.5292321238884", lastmod: Timestamp 4000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 327.5292321238884 }, max: { a: 331.4018789379612 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_331.4018789379612", lastmod: Timestamp 4000|128, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 331.4018789379612 }, max: { a: 334.3168575448847 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_334.3168575448847", lastmod: Timestamp 4000|129, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 334.3168575448847 }, max: { a: 337.6965417950217 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_337.6965417950217", lastmod: Timestamp 4000|212, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 337.6965417950217 }, max: { a: 340.4008653065953 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_340.4008653065953", lastmod: Timestamp 4000|213, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 340.4008653065953 }, max: { a: 344.8762285660836 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_344.8762285660836", lastmod: Timestamp 4000|48, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 344.8762285660836 }, max: { a: 349.1094580993942 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_349.1094580993942", lastmod: Timestamp 4000|49, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 349.1094580993942 }, max: { a: 353.2720479801309 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_353.2720479801309", lastmod: Timestamp 4000|240, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 353.2720479801309 }, max: { a: 355.8076820303829 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_355.8076820303829", lastmod: Timestamp 4000|241, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 355.8076820303829 }, max: { a: 358.3343339611492 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_358.3343339611492", lastmod: Timestamp 4000|232, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 358.3343339611492 }, max: { a: 360.7881657776425 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_360.7881657776425", lastmod: Timestamp 4000|233, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 360.7881657776425 }, max: { a: 363.6779080113047 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_363.6779080113047", lastmod: Timestamp 2000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 363.6779080113047 }, max: { a: 369.0981926515277 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_369.0981926515277", lastmod: Timestamp 4000|26, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 369.0981926515277 }, max: { a: 373.3849373054079 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_373.3849373054079", lastmod: Timestamp 4000|27, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 373.3849373054079 }, max: { a: 378.3565272980204 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_378.3565272980204", lastmod: Timestamp 4000|242, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 378.3565272980204 }, max: { a: 380.9471963970786 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_380.9471963970786", lastmod: Timestamp 4000|243, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 380.9471963970786 }, max: { a: 383.7239757530736 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_383.7239757530736", lastmod: Timestamp 4000|40, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 383.7239757530736 }, max: { a: 387.7659705009871 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_387.7659705009871", lastmod: Timestamp 4000|41, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 387.7659705009871 }, max: { a: 392.8718206829087 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_392.8718206829087", lastmod: Timestamp 4000|208, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 392.8718206829087 }, max: { a: 395.6502767966605 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_395.6502767966605", lastmod: Timestamp 4000|258, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 395.6502767966605 }, max: { a: 398.1780778922134 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_398.1780778922134", lastmod: Timestamp 4000|259, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 398.1780778922134 }, max: { a: 400.6101810646703 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_400.6101810646703", lastmod: Timestamp 4000|104, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 400.6101810646703 }, max: { a: 404.1458625239371 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_404.1458625239371", lastmod: Timestamp 4000|166, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 404.1458625239371 }, max: { a: 407.0796926580036 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_407.0796926580036", lastmod: Timestamp 4000|167, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 407.0796926580036 }, max: { a: 411.0287894698923 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_411.0287894698923", lastmod: Timestamp 4000|174, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 411.0287894698923 }, max: { a: 413.7945438036655 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_413.7945438036655", lastmod: Timestamp 4000|175, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 413.7945438036655 }, max: { a: 417.3437896431063 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_417.3437896431063", lastmod: Timestamp 2000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 417.3437896431063 }, max: { a: 422.4151431966537 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_422.4151431966537", lastmod: Timestamp 2000|67, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 422.4151431966537 }, max: { a: 427.2300955074828 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_427.2300955074828", lastmod: Timestamp 4000|160, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 427.2300955074828 }, max: { a: 430.2130944220548 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_430.2130944220548", lastmod: Timestamp 4000|161, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 430.2130944220548 }, max: { a: 433.3806610330477 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_433.3806610330477", lastmod: Timestamp 4000|74, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 433.3806610330477 }, max: { a: 437.040103636678 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_437.040103636678", lastmod: Timestamp 4000|75, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 437.040103636678 }, max: { a: 441.0435238853461 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_441.0435238853461", lastmod: Timestamp 4000|236, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 441.0435238853461 }, max: { a: 443.7079718299926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_443.7079718299926", lastmod: Timestamp 4000|237, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 443.7079718299926 }, max: { a: 447.8806134954977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_447.8806134954977", lastmod: Timestamp 4000|98, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 447.8806134954977 }, max: { a: 451.8120411874291 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_451.8120411874291", lastmod: Timestamp 4000|99, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 451.8120411874291 }, max: { a: 456.4586339452165 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_456.4586339452165", lastmod: Timestamp 4000|136, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 456.4586339452165 }, max: { a: 459.7315330482733 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_459.7315330482733", lastmod: Timestamp 4000|137, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 459.7315330482733 }, max: { a: 463.2766201180535 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_463.2766201180535", lastmod: Timestamp 4000|218, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 463.2766201180535 }, max: { a: 466.1607312365173 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_466.1607312365173", lastmod: Timestamp 4000|219, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 466.1607312365173 }, max: { a: 473.1445991105042 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_473.1445991105042", lastmod: Timestamp 4000|66, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 473.1445991105042 }, max: { a: 477.2807394020033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_477.2807394020033", lastmod: Timestamp 4000|142, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 477.2807394020033 }, max: { a: 480.2747403619077 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_480.2747403619077", lastmod: Timestamp 4000|143, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 480.2747403619077 }, max: { a: 483.6281235892167 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_483.6281235892167", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 483.6281235892167 }, max: { a: 490.1028421929578 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_490.1028421929578", lastmod: Timestamp 4000|110, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 490.1028421929578 }, max: { a: 493.6797279933101 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_493.6797279933101", lastmod: Timestamp 4000|111, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 493.6797279933101 }, max: { a: 498.2021416153332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_498.2021416153332", lastmod: Timestamp 4000|124, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 498.2021416153332 }, max: { a: 501.5945768521381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_501.5945768521381", lastmod: Timestamp 4000|246, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 501.5945768521381 }, max: { a: 503.8814286501491 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_503.8814286501491", lastmod: Timestamp 4000|247, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 503.8814286501491 }, max: { a: 506.5947777056855 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_506.5947777056855", lastmod: Timestamp 4000|36, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 506.5947777056855 }, max: { a: 510.639225969218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_510.639225969218", lastmod: Timestamp 4000|37, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 510.639225969218 }, max: { a: 515.6449770586091 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_515.6449770586091", lastmod: Timestamp 4000|262, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 515.6449770586091 }, max: { a: 518.2463999492195 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_518.2463999492195", lastmod: Timestamp 4000|263, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 518.2463999492195 }, max: { a: 521.3538677091974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_521.3538677091974", lastmod: Timestamp 2000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 521.3538677091974 }, max: { a: 526.919018850918 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_526.919018850918", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 526.919018850918 }, max: { a: 531.7597013546634 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_531.7597013546634", lastmod: Timestamp 4000|22, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 531.7597013546634 }, max: { a: 536.0462960134931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_536.0462960134931", lastmod: Timestamp 4000|188, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 536.0462960134931 }, max: { a: 539.1281234038355 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_539.1281234038355", lastmod: Timestamp 4000|189, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 539.1281234038355 }, max: { a: 542.4296058071777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_542.4296058071777", lastmod: Timestamp 4000|108, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 542.4296058071777 }, max: { a: 545.8257932837977 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_545.8257932837977", lastmod: Timestamp 4000|150, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 545.8257932837977 }, max: { a: 548.9817180888258 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_548.9817180888258", lastmod: Timestamp 4000|151, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 548.9817180888258 }, max: { a: 552.1925267328988 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_552.1925267328988", lastmod: Timestamp 4000|260, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 552.1925267328988 }, max: { a: 554.5352736346487 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_554.5352736346487", lastmod: Timestamp 4000|261, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 554.5352736346487 }, max: { a: 558.0115575910545 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_558.0115575910545", lastmod: Timestamp 4000|182, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 558.0115575910545 }, max: { a: 560.838593433049 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_560.838593433049", lastmod: Timestamp 4000|183, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 560.838593433049 }, max: { a: 563.897889911273 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_563.897889911273", lastmod: Timestamp 4000|92, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 563.897889911273 }, max: { a: 567.3645636091692 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_567.3645636091692", lastmod: Timestamp 4000|93, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 567.3645636091692 }, max: { a: 571.914212129846 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_571.914212129846", lastmod: Timestamp 4000|130, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 571.914212129846 }, max: { a: 575.2102660145707 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_575.2102660145707", lastmod: Timestamp 4000|131, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 575.2102660145707 }, max: { a: 580.4600029065366 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_580.4600029065366", lastmod: Timestamp 4000|68, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 580.4600029065366 }, max: { a: 584.4225320226172 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_584.4225320226172", lastmod: Timestamp 4000|180, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 584.4225320226172 }, max: { a: 587.1685851091131 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_587.1685851091131", lastmod: Timestamp 4000|181, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 587.1685851091131 }, max: { a: 590.8997745355827 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_590.8997745355827", lastmod: Timestamp 4000|90, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 590.8997745355827 }, max: { a: 594.3878051880898 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_594.3878051880898", lastmod: Timestamp 4000|91, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 594.3878051880898 }, max: { a: 599.2155367136296 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_599.2155367136296", lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 599.2155367136296 }, max: { a: 603.53104016638 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_603.53104016638", lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 603.53104016638 }, max: { a: 610.6068178358934 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_610.6068178358934", lastmod: Timestamp 2000|62, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 610.6068178358934 }, max: { a: 615.3266278873516 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_615.3266278873516", lastmod: Timestamp 4000|238, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 615.3266278873516 }, max: { a: 617.9571577143996 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_617.9571577143996", lastmod: Timestamp 4000|239, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 617.9571577143996 }, max: { a: 623.3985075048967 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_623.3985075048967", lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 623.3985075048967 }, max: { a: 628.1995001147562 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_628.1995001147562", lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 628.1995001147562 }, max: { a: 632.4786347534061 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_632.4786347534061", lastmod: Timestamp 4000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 632.4786347534061 }, max: { a: 636.2085863336085 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_636.2085863336085", lastmod: Timestamp 4000|61, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 636.2085863336085 }, max: { a: 640.7093733209429 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_640.7093733209429", lastmod: Timestamp 4000|84, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 640.7093733209429 }, max: { a: 644.4017960752651 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_644.4017960752651", lastmod: Timestamp 4000|85, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 644.4017960752651 }, max: { a: 648.6747268265868 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_648.6747268265868", lastmod: Timestamp 4000|52, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 648.6747268265868 }, max: { a: 652.9401841699823 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_652.9401841699823", lastmod: Timestamp 4000|53, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 652.9401841699823 }, max: { a: 657.3538695372831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_657.3538695372831", lastmod: Timestamp 4000|138, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 657.3538695372831 }, max: { a: 660.6896106858891 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_660.6896106858891", lastmod: Timestamp 4000|139, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 660.6896106858891 }, max: { a: 664.5574284897642 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_664.5574284897642", lastmod: Timestamp 4000|38, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 664.5574284897642 }, max: { a: 668.6362621623331 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_668.6362621623331", lastmod: Timestamp 4000|88, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 668.6362621623331 }, max: { a: 672.2870891659105 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_672.2870891659105", lastmod: Timestamp 4000|206, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 672.2870891659105 }, max: { a: 675.1811603867598 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_675.1811603867598", lastmod: Timestamp 4000|207, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 675.1811603867598 }, max: { a: 678.3563510786536 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_678.3563510786536", lastmod: Timestamp 4000|158, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 678.3563510786536 }, max: { a: 681.3003030169281 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_681.3003030169281", lastmod: Timestamp 4000|159, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 681.3003030169281 }, max: { a: 685.0292821001574 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_685.0292821001574", lastmod: Timestamp 4000|28, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 685.0292821001574 }, max: { a: 689.5707127489441 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_689.5707127489441", lastmod: Timestamp 4000|29, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 689.5707127489441 }, max: { a: 694.6501944983177 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_694.6501944983177", lastmod: Timestamp 4000|106, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 694.6501944983177 }, max: { a: 698.4329238257609 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_698.4329238257609", lastmod: Timestamp 4000|107, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 698.4329238257609 }, max: { a: 703.7520953686671 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_703.7520953686671", lastmod: Timestamp 2000|64, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 703.7520953686671 }, max: { a: 708.8986861220777 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_708.8986861220777", lastmod: Timestamp 2000|65, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 708.8986861220777 }, max: { a: 714.0536251380356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_714.0536251380356", lastmod: Timestamp 4000|198, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 714.0536251380356 }, max: { a: 717.0859810000978 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_717.0859810000978", lastmod: Timestamp 4000|199, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 717.0859810000978 }, max: { a: 721.9923962351373 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_721.9923962351373", lastmod: Timestamp 4000|82, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 721.9923962351373 }, max: { a: 725.5771489434317 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_725.5771489434317", lastmod: Timestamp 4000|83, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 725.5771489434317 }, max: { a: 729.8361633348899 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_729.8361633348899", lastmod: Timestamp 4000|144, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 729.8361633348899 }, max: { a: 732.9348251743502 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_732.9348251743502", lastmod: Timestamp 4000|202, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 732.9348251743502 }, max: { a: 735.4457009121708 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_735.4457009121708", lastmod: Timestamp 4000|203, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 735.4457009121708 }, max: { a: 738.6198156338151 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_738.6198156338151", lastmod: Timestamp 4000|192, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 738.6198156338151 }, max: { a: 741.3245176669844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_741.3245176669844", lastmod: Timestamp 4000|193, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 741.3245176669844 }, max: { a: 744.9210849408088 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_744.9210849408088", lastmod: Timestamp 4000|80, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 744.9210849408088 }, max: { a: 748.6872188241756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_748.6872188241756", lastmod: Timestamp 4000|81, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 748.6872188241756 }, max: { a: 752.6019558395919 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_752.6019558395919", lastmod: Timestamp 4000|54, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 752.6019558395919 }, max: { a: 756.637103632288 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_756.637103632288", lastmod: Timestamp 4000|55, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 756.637103632288 }, max: { a: 761.349721153896 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_761.349721153896", lastmod: Timestamp 4000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 761.349721153896 }, max: { a: 765.2211241548246 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_765.2211241548246", lastmod: Timestamp 4000|140, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 765.2211241548246 }, max: { a: 768.6399184840259 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_768.6399184840259", lastmod: Timestamp 4000|141, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 768.6399184840259 }, max: { a: 773.3799848158397 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_773.3799848158397", lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 773.3799848158397 }, max: { a: 777.6503149863191 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_777.6503149863191", lastmod: Timestamp 4000|148, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 777.6503149863191 }, max: { a: 780.6933276463033 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_780.6933276463033", lastmod: Timestamp 4000|149, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 780.6933276463033 }, max: { a: 784.2714953599016 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_784.2714953599016", lastmod: Timestamp 4000|200, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 784.2714953599016 }, max: { a: 787.2181223195419 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_787.2181223195419", lastmod: Timestamp 4000|201, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 787.2181223195419 }, max: { a: 790.298943411581 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_790.298943411581", lastmod: Timestamp 4000|114, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 790.298943411581 }, max: { a: 793.7120312511385 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_793.7120312511385", lastmod: Timestamp 4000|115, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 793.7120312511385 }, max: { a: 797.6352444405507 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_797.6352444405507", lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 797.6352444405507 }, max: { a: 802.4966878498034 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_802.4966878498034", lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 802.4966878498034 }, max: { a: 807.4105833931693 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_807.4105833931693", lastmod: Timestamp 4000|118, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 807.4105833931693 }, max: { a: 810.8918013325706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_810.8918013325706", lastmod: Timestamp 4000|119, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 810.8918013325706 }, max: { a: 815.7684070742035 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_815.7684070742035", lastmod: Timestamp 2000|60, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 815.7684070742035 }, max: { a: 821.178966084225 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_821.178966084225", lastmod: Timestamp 4000|178, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 821.178966084225 }, max: { a: 824.2680954051706 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_824.2680954051706", lastmod: Timestamp 4000|179, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 824.2680954051706 }, max: { a: 827.5642418995561 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_827.5642418995561", lastmod: Timestamp 2000|30, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 827.5642418995561 }, max: { a: 833.5963963333859 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_833.5963963333859", lastmod: Timestamp 4000|216, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 833.5963963333859 }, max: { a: 836.3608305125814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_836.3608305125814", lastmod: Timestamp 4000|217, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 836.3608305125814 }, max: { a: 840.7121644073931 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_840.7121644073931", lastmod: Timestamp 4000|122, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 840.7121644073931 }, max: { a: 843.8858257205128 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_843.8858257205128", lastmod: Timestamp 4000|123, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 843.8858257205128 }, max: { a: 848.2332478721062 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_848.2332478721062", lastmod: Timestamp 4000|162, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 848.2332478721062 }, max: { a: 851.468355264985 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_851.468355264985", lastmod: Timestamp 4000|163, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 851.468355264985 }, max: { a: 855.8703567421647 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_855.8703567421647", lastmod: Timestamp 2000|20, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 855.8703567421647 }, max: { a: 861.9626177544285 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_861.9626177544285", lastmod: Timestamp 4000|172, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 861.9626177544285 }, max: { a: 864.7746195980726 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_864.7746195980726", lastmod: Timestamp 4000|173, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 864.7746195980726 }, max: { a: 868.5788679342879 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_868.5788679342879", lastmod: Timestamp 2000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 868.5788679342879 }, max: { a: 873.8718881199745 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_873.8718881199745", lastmod: Timestamp 4000|86, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 873.8718881199745 }, max: { a: 877.8438233640235 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_877.8438233640235", lastmod: Timestamp 4000|87, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 877.8438233640235 }, max: { a: 882.331873780809 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_882.331873780809", lastmod: Timestamp 4000|58, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 882.331873780809 }, max: { a: 886.5207670748756 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_886.5207670748756", lastmod: Timestamp 4000|59, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 886.5207670748756 }, max: { a: 891.8750702869381 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_891.8750702869381", lastmod: Timestamp 4000|190, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 891.8750702869381 }, max: { a: 894.8106130543974 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_894.8106130543974", lastmod: Timestamp 4000|191, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 894.8106130543974 }, max: { a: 898.6566515076229 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_898.6566515076229", lastmod: Timestamp 4000|168, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 898.6566515076229 }, max: { a: 901.6037051063506 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_901.6037051063506", lastmod: Timestamp 4000|169, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 901.6037051063506 }, max: { a: 905.2934559328332 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_905.2934559328332", lastmod: Timestamp 4000|254, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 905.2934559328332 }, max: { a: 907.8304631917699 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_907.8304631917699", lastmod: Timestamp 4000|255, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 907.8304631917699 }, max: { a: 910.9608546053483 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_910.9608546053483", lastmod: Timestamp 4000|146, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 910.9608546053483 }, max: { a: 914.1361338478089 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_914.1361338478089", lastmod: Timestamp 4000|147, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 914.1361338478089 }, max: { a: 918.4259760765641 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_918.4259760765641", lastmod: Timestamp 4000|126, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 918.4259760765641 }, max: { a: 921.5853246168082 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_921.5853246168082", lastmod: Timestamp 4000|127, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 921.5853246168082 }, max: { a: 927.6813889109981 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_927.6813889109981", lastmod: Timestamp 2000|56, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 927.6813889109981 }, max: { a: 933.0462189495814 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_933.0462189495814", lastmod: Timestamp 2000|57, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 933.0462189495814 }, max: { a: 938.1160661714987 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_938.1160661714987", lastmod: Timestamp 2000|70, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 938.1160661714987 }, max: { a: 943.2489828660326 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_943.2489828660326", lastmod: Timestamp 2000|71, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 943.2489828660326 }, max: { a: 948.0165404542549 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_948.0165404542549", lastmod: Timestamp 4000|152, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 948.0165404542549 }, max: { a: 951.1531632632295 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_951.1531632632295", lastmod: Timestamp 4000|153, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 951.1531632632295 }, max: { a: 955.9182567868356 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_955.9182567868356", lastmod: Timestamp 4000|24, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 955.9182567868356 }, max: { a: 960.5824651536831 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_960.5824651536831", lastmod: Timestamp 4000|25, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 960.5824651536831 }, max: { a: 964.9150523226922 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_964.9150523226922", lastmod: Timestamp 2000|44, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 964.9150523226922 }, max: { a: 970.39026226179 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_970.39026226179", lastmod: Timestamp 4000|194, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 970.39026226179 }, max: { a: 973.4895868865218 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_973.4895868865218", lastmod: Timestamp 4000|195, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 973.4895868865218 }, max: { a: 977.1164746659301 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_977.1164746659301", lastmod: Timestamp 4000|96, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 977.1164746659301 }, max: { a: 980.667776515926 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_980.667776515926", lastmod: Timestamp 4000|97, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 980.667776515926 }, max: { a: 985.6773819217475 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_985.6773819217475", lastmod: Timestamp 4000|230, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 985.6773819217475 }, max: { a: 988.3510075746844 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_988.3510075746844", lastmod: Timestamp 4000|231, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 988.3510075746844 }, max: { a: 991.2502100401695 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_991.2502100401695", lastmod: Timestamp 4000|134, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 991.2502100401695 }, max: { a: 994.7222740534528 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_994.7222740534528", lastmod: Timestamp 4000|135, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 994.7222740534528 }, max: { a: 998.3975234740553 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.foo-a_998.3975234740553", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 998.3975234740553 }, max: { a: MaxKey }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] ----
m30999| Thu Jun 14 01:47:09 [Balancer] collection : test.foo
m30999| Thu Jun 14 01:47:09 [Balancer] donor : 255 chunks on shard0001
m30999| Thu Jun 14 01:47:09 [Balancer] receiver : 6 chunks on shard0000
m30999| Thu Jun 14 01:47:09 [Balancer] chose [shard0001] to [shard0000] { _id: "test.foo-a_16.11151483141404", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('4fd97a3b0d2fef4d6a507be2'), ns: "test.foo", min: { a: 16.11151483141404 }, max: { a: 20.02617482801994 }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] ---- ShardInfoMap
m30999| Thu Jun 14 01:47:09 [Balancer] shard0000 maxSize: 0 currSize: 544 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:47:09 [Balancer] shard0001 maxSize: 0 currSize: 1023 draining: 0 hasOpsQueued: 0
m30999| Thu Jun 14 01:47:09 [Balancer] ---- ShardToChunksMap
m30000| Thu Jun 14 01:47:09 [conn21] getmore config.chunks query: { query: { ns: "test.mrShardedOut" }, orderby: { min: 1 } } cursorid:2850858621938954405 ntoreturn:0 keyUpdates:0 locks(micros) r:1465615 w:549840 nreturned:105 reslen:22793 124ms
m30999| Thu Jun 14 01:47:09 [Balancer] shard0000
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff228c8')", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, max: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22c95')", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, max: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff2305f')", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, max: { _id: ObjectId('4fd97a3d05a35677eff23246') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2342c')", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, max: { _id: ObjectId('4fd97a3d05a35677eff23611') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff237f5')", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, max: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23bc4')", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, max: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23f8f')", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24176') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2435d')", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, max: { _id: ObjectId('4fd97a3d05a35677eff24541') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24727')", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24727') }, max: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24af4')", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, max: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24ec4')", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, max: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25295')", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25295') }, max: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25663')", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25663') }, max: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25a31')", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, max: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25e01')", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, max: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff261d0')", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, max: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff26598')", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff26598') }, max: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26964')", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26964') }, max: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26d35')", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, max: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27105')", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27105') }, max: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff274d5')", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, max: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff278a1')", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, max: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27c6f')", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2803f')", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, max: { _id: ObjectId('4fd97a3f05a35677eff28226') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff2840d')", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, max: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff287d7')", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff287d7') }, max: { _id: ObjectId('4fd97a4005a35677eff289bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28ba4')", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, max: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28f71')", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28f71') }, max: { _id: ObjectId('4fd97a4005a35677eff29159') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2933f')", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2933f') }, max: { _id: ObjectId('4fd97a4005a35677eff29523') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29708')", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29708') }, max: { _id: ObjectId('4fd97a4005a35677eff298ed') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29ad4')", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, max: { _id: ObjectId('4fd97a4005a35677eff29cba') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29e9f')", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, max: { _id: ObjectId('4fd97a4005a35677eff2a086') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a26b')", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, max: { _id: ObjectId('4fd97a4005a35677eff2a450') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a636')", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a636') }, max: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2aa03')", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, max: { _id: ObjectId('4fd97a4105a35677eff2abea') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2add0')", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2add0') }, max: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b1a0')", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, max: { _id: ObjectId('4fd97a4105a35677eff2b387') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b56f')", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, max: { _id: ObjectId('4fd97a4105a35677eff2b757') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b93b')", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, max: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bd07')", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, max: { _id: ObjectId('4fd97a4205a35677eff2beee') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c0d4')", lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, max: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c4a2')", lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, max: { _id: ObjectId('4fd97a4205a35677eff2c687') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c86f')", lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, max: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2cc39')", lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, max: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d008')", lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d008') }, max: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d3d5')", lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, max: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d7a1')", lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, max: { _id: ObjectId('4fd97a4305a35677eff2d986') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2db6f')", lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, max: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2df3e')", lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, max: { _id: ObjectId('4fd97a4305a35677eff2e127') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e30d')", lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, max: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e6d8')", lastmod: Timestamp 1000|101, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, max: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2eaa5')", lastmod: Timestamp 1000|103, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, max: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ee6d')", lastmod: Timestamp 1000|105, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, max: { _id: ObjectId('4fd97a4305a35677eff2f052') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f239')", lastmod: Timestamp 1000|107, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f239') }, max: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f603')", lastmod: Timestamp 1000|109, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f603') }, max: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f9cd')", lastmod: Timestamp 1000|111, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, max: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fd9a')", lastmod: Timestamp 1000|113, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, max: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3016a')", lastmod: Timestamp 1000|115, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3016a') }, max: { _id: ObjectId('4fd97a4405a35677eff30351') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30537')", lastmod: Timestamp 1000|117, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30537') }, max: { _id: ObjectId('4fd97a4405a35677eff30721') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30907')", lastmod: Timestamp 1000|119, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30907') }, max: { _id: ObjectId('4fd97a4405a35677eff30aef') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30cd5')", lastmod: Timestamp 1000|121, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, max: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff310a7')", lastmod: Timestamp 1000|123, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff310a7') }, max: { _id: ObjectId('4fd97a4405a35677eff3128e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31473')", lastmod: Timestamp 1000|125, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31473') }, max: { _id: ObjectId('4fd97a4405a35677eff3165b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31841')", lastmod: Timestamp 1000|127, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31841') }, max: { _id: ObjectId('4fd97a4405a35677eff31a28') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31c0d')", lastmod: Timestamp 1000|129, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, max: { _id: ObjectId('4fd97a4405a35677eff31df3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31fda')", lastmod: Timestamp 1000|131, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31fda') }, max: { _id: ObjectId('4fd97a4405a35677eff321bf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff323a4')", lastmod: Timestamp 1000|133, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff323a4') }, max: { _id: ObjectId('4fd97a4405a35677eff3258c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32774')", lastmod: Timestamp 1000|135, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32774') }, max: { _id: ObjectId('4fd97a4505a35677eff32958') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32b3d')", lastmod: Timestamp 1000|137, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, max: { _id: ObjectId('4fd97a4505a35677eff32d23') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32f0c')", lastmod: Timestamp 1000|139, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, max: { _id: ObjectId('4fd97a4505a35677eff330f5') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff332d9')", lastmod: Timestamp 1000|141, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff332d9') }, max: { _id: ObjectId('4fd97a4505a35677eff334c2') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff336ab')", lastmod: Timestamp 1000|143, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff336ab') }, max: { _id: ObjectId('4fd97a4505a35677eff33891') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33a77')", lastmod: Timestamp 1000|145, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33a77') }, max: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33e41')", lastmod: Timestamp 1000|147, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33e41') }, max: { _id: ObjectId('4fd97a4605a35677eff34026') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff3420d')", lastmod: Timestamp 1000|149, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff3420d') }, max: { _id: ObjectId('4fd97a4605a35677eff343f3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff345d9')", lastmod: Timestamp 1000|151, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff345d9') }, max: { _id: ObjectId('4fd97a4605a35677eff347c1') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff349a9')", lastmod: Timestamp 1000|153, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff349a9') }, max: { _id: ObjectId('4fd97a4705a35677eff34b90') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34d79')", lastmod: Timestamp 1000|155, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34d79') }, max: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35147')", lastmod: Timestamp 1000|157, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35147') }, max: { _id: ObjectId('4fd97a4705a35677eff3532c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35511')", lastmod: Timestamp 1000|159, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35511') }, max: { _id: ObjectId('4fd97a4705a35677eff356fa') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff358e1')", lastmod: Timestamp 1000|161, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff358e1') }, max: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35cab')", lastmod: Timestamp 1000|163, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35cab') }, max: { _id: ObjectId('4fd97a4705a35677eff35e91') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3607a')", lastmod: Timestamp 1000|165, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3607a') }, max: { _id: ObjectId('4fd97a4805a35677eff3625f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36447')", lastmod: Timestamp 1000|167, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36447') }, max: { _id: ObjectId('4fd97a4805a35677eff3662c') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36814')", lastmod: Timestamp 1000|169, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36814') }, max: { _id: ObjectId('4fd97a4805a35677eff369f9') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36be0')", lastmod: Timestamp 1000|171, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36be0') }, max: { _id: ObjectId('4fd97a4805a35677eff36dca') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36faf')", lastmod: Timestamp 1000|173, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36faf') }, max: { _id: ObjectId('4fd97a4805a35677eff37195') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3737a')", lastmod: Timestamp 1000|175, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3737a') }, max: { _id: ObjectId('4fd97a4805a35677eff37560') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37747')", lastmod: Timestamp 1000|177, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37747') }, max: { _id: ObjectId('4fd97a4905a35677eff3792f') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37b15')", lastmod: Timestamp 1000|179, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37b15') }, max: { _id: ObjectId('4fd97a4905a35677eff37cff') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37ee8')", lastmod: Timestamp 1000|181, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, max: { _id: ObjectId('4fd97a4905a35677eff380d0') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff382b9')", lastmod: Timestamp 1000|183, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff382b9') }, max: { _id: ObjectId('4fd97a4905a35677eff3849e') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38684')", lastmod: Timestamp 1000|185, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38684') }, max: { _id: ObjectId('4fd97a4905a35677eff38869') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38a4e')", lastmod: Timestamp 1000|187, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, max: { _id: ObjectId('4fd97a4905a35677eff38c32') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38e1d')", lastmod: Timestamp 1000|189, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, max: { _id: ObjectId('4fd97a4905a35677eff39001') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff391e8')", lastmod: Timestamp 1000|191, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff391e8') }, max: { _id: ObjectId('4fd97a4905a35677eff393cf') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff395b6')", lastmod: Timestamp 1000|193, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff395b6') }, max: { _id: ObjectId('4fd97a4905a35677eff3979b') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39985')", lastmod: Timestamp 1000|195, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39985') }, max: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39d51')", lastmod: Timestamp 1000|197, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, max: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a121')", lastmod: Timestamp 1000|199, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a4ed')", lastmod: Timestamp 1000|201, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a8b9')", lastmod: Timestamp 1000|203, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, max: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3ac84')", lastmod: Timestamp 1000|205, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Thu Jun 14 01:47:09 [Balancer] shard0001
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: MinKey }, max: { _id: ObjectId('4fd97a3c05a35677eff228c8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22aac')", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22aac') }, max: { _id: ObjectId('4fd97a3c05a35677eff22c95') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3c05a35677eff22e7b')", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3c05a35677eff22e7b') }, max: { _id: ObjectId('4fd97a3c05a35677eff2305f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23246')", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23246') }, max: { _id: ObjectId('4fd97a3d05a35677eff2342c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23611')", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23611') }, max: { _id: ObjectId('4fd97a3d05a35677eff237f5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff239dc')", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff239dc') }, max: { _id: ObjectId('4fd97a3d05a35677eff23bc4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff23da9')", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff23da9') }, max: { _id: ObjectId('4fd97a3d05a35677eff23f8f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24176')", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24176') }, max: { _id: ObjectId('4fd97a3d05a35677eff2435d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24541')", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24541') }, max: { _id: ObjectId('4fd97a3d05a35677eff24727') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff2490f')", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff2490f') }, max: { _id: ObjectId('4fd97a3d05a35677eff24af4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3d05a35677eff24cde')", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3d05a35677eff24cde') }, max: { _id: ObjectId('4fd97a3d05a35677eff24ec4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff250ad')", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff250ad') }, max: { _id: ObjectId('4fd97a3e05a35677eff25295') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2547d')", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2547d') }, max: { _id: ObjectId('4fd97a3e05a35677eff25663') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2584a')", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2584a') }, max: { _id: ObjectId('4fd97a3e05a35677eff25a31') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25c16')", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25c16') }, max: { _id: ObjectId('4fd97a3e05a35677eff25e01') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff25fe8')", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff25fe8') }, max: { _id: ObjectId('4fd97a3e05a35677eff261d0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff263b4')", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff263b4') }, max: { _id: ObjectId('4fd97a3e05a35677eff26598') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3e05a35677eff2677e')", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3e05a35677eff2677e') }, max: { _id: ObjectId('4fd97a3f05a35677eff26964') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26b4c')", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26b4c') }, max: { _id: ObjectId('4fd97a3f05a35677eff26d35') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff26f1f')", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff26f1f') }, max: { _id: ObjectId('4fd97a3f05a35677eff27105') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff272ec')", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff272ec') }, max: { _id: ObjectId('4fd97a3f05a35677eff274d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff276ba')", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff276ba') }, max: { _id: ObjectId('4fd97a3f05a35677eff278a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27a87')", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27a87') }, max: { _id: ObjectId('4fd97a3f05a35677eff27c6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff27e57')", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff27e57') }, max: { _id: ObjectId('4fd97a3f05a35677eff2803f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff28226')", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff28226') }, max: { _id: ObjectId('4fd97a3f05a35677eff2840d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a3f05a35677eff285f3')", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a3f05a35677eff285f3') }, max: { _id: ObjectId('4fd97a4005a35677eff287d7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff289bf')", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff289bf') }, max: { _id: ObjectId('4fd97a4005a35677eff28ba4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff28d8b')", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff28d8b') }, max: { _id: ObjectId('4fd97a4005a35677eff28f71') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29159')", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29159') }, max: { _id: ObjectId('4fd97a4005a35677eff2933f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29523')", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29523') }, max: { _id: ObjectId('4fd97a4005a35677eff29708') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff298ed')", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff298ed') }, max: { _id: ObjectId('4fd97a4005a35677eff29ad4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff29cba')", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff29cba') }, max: { _id: ObjectId('4fd97a4005a35677eff29e9f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a086')", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a086') }, max: { _id: ObjectId('4fd97a4005a35677eff2a26b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4005a35677eff2a450')", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4005a35677eff2a450') }, max: { _id: ObjectId('4fd97a4105a35677eff2a636') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2a81d')", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2a81d') }, max: { _id: ObjectId('4fd97a4105a35677eff2aa03') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2abea')", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2abea') }, max: { _id: ObjectId('4fd97a4105a35677eff2add0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2afb8')", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2afb8') }, max: { _id: ObjectId('4fd97a4105a35677eff2b1a0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b387')", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b387') }, max: { _id: ObjectId('4fd97a4105a35677eff2b56f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2b757')", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2b757') }, max: { _id: ObjectId('4fd97a4105a35677eff2b93b') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4105a35677eff2bb23')", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4105a35677eff2bb23') }, max: { _id: ObjectId('4fd97a4105a35677eff2bd07') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2beee')", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2beee') }, max: { _id: ObjectId('4fd97a4205a35677eff2c0d4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c2bb')", lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c2bb') }, max: { _id: ObjectId('4fd97a4205a35677eff2c4a2') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2c687')", lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2c687') }, max: { _id: ObjectId('4fd97a4205a35677eff2c86f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ca54')", lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ca54') }, max: { _id: ObjectId('4fd97a4205a35677eff2cc39') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2ce20')", lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2ce20') }, max: { _id: ObjectId('4fd97a4205a35677eff2d008') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d1ef')", lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d1ef') }, max: { _id: ObjectId('4fd97a4205a35677eff2d3d5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4205a35677eff2d5bc')", lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4205a35677eff2d5bc') }, max: { _id: ObjectId('4fd97a4205a35677eff2d7a1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2d986')", lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2d986') }, max: { _id: ObjectId('4fd97a4305a35677eff2db6f') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2dd54')", lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2dd54') }, max: { _id: ObjectId('4fd97a4305a35677eff2df3e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e127')", lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e127') }, max: { _id: ObjectId('4fd97a4305a35677eff2e30d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e4f2')", lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e4f2') }, max: { _id: ObjectId('4fd97a4305a35677eff2e6d8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2e8bf')", lastmod: Timestamp 1000|102, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2e8bf') }, max: { _id: ObjectId('4fd97a4305a35677eff2eaa5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ec89')", lastmod: Timestamp 1000|104, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ec89') }, max: { _id: ObjectId('4fd97a4305a35677eff2ee6d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f052')", lastmod: Timestamp 1000|106, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f052') }, max: { _id: ObjectId('4fd97a4305a35677eff2f239') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f41f')", lastmod: Timestamp 1000|108, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f41f') }, max: { _id: ObjectId('4fd97a4305a35677eff2f603') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2f7e7')", lastmod: Timestamp 1000|110, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2f7e7') }, max: { _id: ObjectId('4fd97a4305a35677eff2f9cd') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2fbb4')", lastmod: Timestamp 1000|112, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2fbb4') }, max: { _id: ObjectId('4fd97a4305a35677eff2fd9a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4305a35677eff2ff82')", lastmod: Timestamp 1000|114, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4305a35677eff2ff82') }, max: { _id: ObjectId('4fd97a4405a35677eff3016a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30351')", lastmod: Timestamp 1000|116, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30351') }, max: { _id: ObjectId('4fd97a4405a35677eff30537') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30721')", lastmod: Timestamp 1000|118, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30721') }, max: { _id: ObjectId('4fd97a4405a35677eff30907') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30aef')", lastmod: Timestamp 1000|120, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30aef') }, max: { _id: ObjectId('4fd97a4405a35677eff30cd5') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff30ebc')", lastmod: Timestamp 1000|122, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff30ebc') }, max: { _id: ObjectId('4fd97a4405a35677eff310a7') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3128e')", lastmod: Timestamp 1000|124, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3128e') }, max: { _id: ObjectId('4fd97a4405a35677eff31473') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3165b')", lastmod: Timestamp 1000|126, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3165b') }, max: { _id: ObjectId('4fd97a4405a35677eff31841') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31a28')", lastmod: Timestamp 1000|128, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31a28') }, max: { _id: ObjectId('4fd97a4405a35677eff31c0d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff31df3')", lastmod: Timestamp 1000|130, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff31df3') }, max: { _id: ObjectId('4fd97a4405a35677eff31fda') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff321bf')", lastmod: Timestamp 1000|132, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff321bf') }, max: { _id: ObjectId('4fd97a4405a35677eff323a4') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4405a35677eff3258c')", lastmod: Timestamp 1000|134, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4405a35677eff3258c') }, max: { _id: ObjectId('4fd97a4505a35677eff32774') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32958')", lastmod: Timestamp 1000|136, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32958') }, max: { _id: ObjectId('4fd97a4505a35677eff32b3d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff32d23')", lastmod: Timestamp 1000|138, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff32d23') }, max: { _id: ObjectId('4fd97a4505a35677eff32f0c') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff330f5')", lastmod: Timestamp 1000|140, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff330f5') }, max: { _id: ObjectId('4fd97a4505a35677eff332d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff334c2')", lastmod: Timestamp 1000|142, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff334c2') }, max: { _id: ObjectId('4fd97a4505a35677eff336ab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4505a35677eff33891')", lastmod: Timestamp 1000|144, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4505a35677eff33891') }, max: { _id: ObjectId('4fd97a4605a35677eff33a77') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff33c5c')", lastmod: Timestamp 1000|146, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff33c5c') }, max: { _id: ObjectId('4fd97a4605a35677eff33e41') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff34026')", lastmod: Timestamp 1000|148, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff34026') }, max: { _id: ObjectId('4fd97a4605a35677eff3420d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff343f3')", lastmod: Timestamp 1000|150, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff343f3') }, max: { _id: ObjectId('4fd97a4605a35677eff345d9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4605a35677eff347c1')", lastmod: Timestamp 1000|152, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4605a35677eff347c1') }, max: { _id: ObjectId('4fd97a4605a35677eff349a9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34b90')", lastmod: Timestamp 1000|154, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34b90') }, max: { _id: ObjectId('4fd97a4705a35677eff34d79') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff34f5f')", lastmod: Timestamp 1000|156, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff34f5f') }, max: { _id: ObjectId('4fd97a4705a35677eff35147') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff3532c')", lastmod: Timestamp 1000|158, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff3532c') }, max: { _id: ObjectId('4fd97a4705a35677eff35511') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff356fa')", lastmod: Timestamp 1000|160, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff356fa') }, max: { _id: ObjectId('4fd97a4705a35677eff358e1') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35ac6')", lastmod: Timestamp 1000|162, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35ac6') }, max: { _id: ObjectId('4fd97a4705a35677eff35cab') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4705a35677eff35e91')", lastmod: Timestamp 1000|164, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4705a35677eff35e91') }, max: { _id: ObjectId('4fd97a4805a35677eff3607a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3625f')", lastmod: Timestamp 1000|166, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3625f') }, max: { _id: ObjectId('4fd97a4805a35677eff36447') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff3662c')", lastmod: Timestamp 1000|168, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff3662c') }, max: { _id: ObjectId('4fd97a4805a35677eff36814') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff369f9')", lastmod: Timestamp 1000|170, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff369f9') }, max: { _id: ObjectId('4fd97a4805a35677eff36be0') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff36dca')", lastmod: Timestamp 1000|172, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff36dca') }, max: { _id: ObjectId('4fd97a4805a35677eff36faf') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37195')", lastmod: Timestamp 1000|174, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37195') }, max: { _id: ObjectId('4fd97a4805a35677eff3737a') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4805a35677eff37560')", lastmod: Timestamp 1000|176, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4805a35677eff37560') }, max: { _id: ObjectId('4fd97a4905a35677eff37747') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3792f')", lastmod: Timestamp 1000|178, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3792f') }, max: { _id: ObjectId('4fd97a4905a35677eff37b15') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff37cff')", lastmod: Timestamp 1000|180, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff37cff') }, max: { _id: ObjectId('4fd97a4905a35677eff37ee8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff380d0')", lastmod: Timestamp 1000|182, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff380d0') }, max: { _id: ObjectId('4fd97a4905a35677eff382b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3849e')", lastmod: Timestamp 1000|184, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3849e') }, max: { _id: ObjectId('4fd97a4905a35677eff38684') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38869')", lastmod: Timestamp 1000|186, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38869') }, max: { _id: ObjectId('4fd97a4905a35677eff38a4e') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff38c32')", lastmod: Timestamp 1000|188, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff38c32') }, max: { _id: ObjectId('4fd97a4905a35677eff38e1d') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff39001')", lastmod: Timestamp 1000|190, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff39001') }, max: { _id: ObjectId('4fd97a4905a35677eff391e8') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff393cf')", lastmod: Timestamp 1000|192, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff393cf') }, max: { _id: ObjectId('4fd97a4905a35677eff395b6') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4905a35677eff3979b')", lastmod: Timestamp 1000|194, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4905a35677eff3979b') }, max: { _id: ObjectId('4fd97a4a05a35677eff39985') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39b6a')", lastmod: Timestamp 1000|196, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39b6a') }, max: { _id: ObjectId('4fd97a4a05a35677eff39d51') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff39f36')", lastmod: Timestamp 1000|198, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff39f36') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a121') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a306')", lastmod: Timestamp 1000|200, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a306') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a4ed') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3a6d3')", lastmod: Timestamp 1000|202, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3a6d3') }, max: { _id: ObjectId('4fd97a4a05a35677eff3a8b9') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] { _id: "test.mrShardedOut-_id_ObjectId('4fd97a4a05a35677eff3aa9d')", lastmod: Timestamp 1000|204, lastmodEpoch: ObjectId('4fd97a640d2fef4d6a507be7'), ns: "test.mrShardedOut", min: { _id: ObjectId('4fd97a4a05a35677eff3aa9d') }, max: { _id: ObjectId('4fd97a4a05a35677eff3ac84') }, shard: "shard0001" }
m30999| Thu Jun 14 01:47:09 [Balancer] ----
m30999| Thu Jun 14 01:47:09 [Balancer] collection : test.mrShardedOut
m30999| Thu Jun 14 01:47:09 [Balancer] donor : 103 chunks on shard0000
m30999| Thu Jun 14 01:47:09 [Balancer] receiver : 103 chunks on shard0000
m30999| Received signal 11
m30999| Backtrace: 0x8381f67 0xea5420 0x81f8974 0x835a481 0x82c3073 0x82c4b6c 0x832f1b0 0x833179e 0x813c30e 0x9d4542 0x40db6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo17printStackAndExitEi+0x77)[0x8381f67]
m30999| [0xea5420]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo7BSONObj13extractFieldsERKS0_b+0x114)[0x81f8974]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZNK5mongo12ChunkManager9findChunkERKNS_7BSONObjE+0x1e1)[0x835a481]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer11_moveChunksEPKSt6vectorIN5boost10shared_ptrINS_14BalancerPolicy11MigrateInfoEEESaIS6_EE+0x613)[0x82c3073]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo8Balancer3runEv+0x69c)[0x82c4b6c]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb0)[0x832f1b0]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e)[0x833179e]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos[0x813c30e]
m30999| /lib/i686/nosegneg/libpthread.so.0[0x9d4542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e)[0x40db6e]
m30999| ===
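The backtrace above shows mongos dying with signal 11 inside the Balancer round that printed the chunk dump: Balancer::_moveChunks calls ChunkManager::findChunk, which faults in BSONObj::extractFields. Purely for reference, below is a minimal mongo shell sketch that reads the same config.chunks metadata the Balancer was iterating. It assumes a mongos reachable at 127.0.0.1:30999 (the address the failing query in this log was sent to); after this crash that process is gone, so this is illustrative only, not part of the test.

```javascript
// Illustrative sketch only (assumes a live mongos at 127.0.0.1:30999).
// Reads the chunk metadata for the map-reduce output collection that the
// Balancer dumps above, and tallies chunks per shard.
var conn = new Mongo("127.0.0.1:30999");
var config = conn.getDB("config");

var counts = {};
config.chunks.find({ ns: "test.mrShardedOut" }).forEach(function (c) {
    counts[c.shard] = (counts[c.shard] || 0) + 1;
});
// Prints the per-shard chunk counts the Balancer summarizes as donor/receiver above.
printjson(counts);
```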
m30000| Thu Jun 14 01:47:09 [conn3] end connection 127.0.0.1:60386 (17 connections now open)
m30000| Thu Jun 14 01:47:09 [conn4] end connection 127.0.0.1:60390 (17 connections now open)
m30000| Thu Jun 14 01:47:09 [conn22] end connection 127.0.0.1:39149 (15 connections now open)
m30000| Thu Jun 14 01:47:09 [conn21] end connection 127.0.0.1:39148 (14 connections now open)
m30000| Thu Jun 14 01:47:09 [conn14] end connection 127.0.0.1:60406 (14 connections now open)
m30001| Thu Jun 14 01:47:10 [conn2] end connection 127.0.0.1:48969 (9 connections now open)
m30001| Thu Jun 14 01:47:10 [conn3] end connection 127.0.0.1:48971 (9 connections now open)
m30001| Thu Jun 14 01:47:10 [conn5] ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| end connection 127.0.0.1:48976 (8 connections now open)
m30001| Thu Jun 14 01:47:10 [conn7] end connection 127.0.0.1:48982 (7 connections now open)
Thu Jun 14 01:47:10 DBClientCursor::init call() failed
Thu Jun 14 01:47:11 query failed : test.$cmd { mapreduce: "foo", map: function map2() {
emit(this._id, {count:1, y:this.y});
}, reduce: function reduce2(key, values) {
return values[0];
}, out: { replace: "mrShardedOut", sharded: true } } to: 127.0.0.1:30999
Thu Jun 14 01:47:11 Error: error doing query: failed src/mongo/shell/collection.js:155
failed to load: /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mrShardedOutput.js
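The operation that fails is the sharded-output mapReduce the jstest issues through mongos; the command document is logged verbatim just above ({ mapreduce: "foo", ..., out: { replace: "mrShardedOut", sharded: true } }). For context, here is a minimal mongo shell sketch of an equivalent call. It is not a copy of jstests/sharding/mrShardedOutput.js; it assumes a mongos at 127.0.0.1:30999 (from the log) and that test.foo already exists and is sharded, as the test sets up earlier.

```javascript
// Illustrative sketch of the logged command, not the test file itself.
// Assumes a mongos at 127.0.0.1:30999 with a populated, sharded test.foo.
var db = connect("127.0.0.1:30999/test");

function map2() {
    emit(this._id, { count: 1, y: this.y });
}
function reduce2(key, values) {
    return values[0];
}

// Equivalent to the { mapreduce: "foo", ..., out: { replace: "mrShardedOut",
// sharded: true } } command that fails here when mongos dies mid-balancing.
var res = db.foo.mapReduce(map2, reduce2,
                           { out: { replace: "mrShardedOut", sharded: true } });
printjson(res);
```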
m30000| Thu Jun 14 01:47:11 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Thu Jun 14 01:47:11 [interruptThread] now exiting
m30000| Thu Jun 14 01:47:11 dbexit:
m30000| Thu Jun 14 01:47:11 [interruptThread] shutdown: going to close listening sockets...
m30000| Thu Jun 14 01:47:11 [interruptThread] closing listening socket: 13
m30000| Thu Jun 14 01:47:11 [interruptThread] closing listening socket: 14
m30000| Thu Jun 14 01:47:11 [interruptThread] closing listening socket: 16
m30000| Thu Jun 14 01:47:11 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Thu Jun 14 01:47:11 [interruptThread] shutdown: going to flush diaglog...
m30000| Thu Jun 14 01:47:11 [interruptThread] shutdown: going to close sockets...
m30000| Thu Jun 14 01:47:11 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:47:11 [conn6] ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| end connection 127.0.0.1:48979 (5 connections now open)
m30000| Thu Jun 14 01:47:11 [conn16] end connection 127.0.0.1:60409 (12 connections now open)
m30001| Thu Jun 14 01:47:11 [conn9] end connection 127.0.0.1:48988 (4 connections now open)
m30000| Thu Jun 14 01:47:11 [conn15] end connection 127.0.0.1:60408 (11 connections now open)
m30000| Thu Jun 14 01:47:11 [conn18] end connection 127.0.0.1:60412 (12 connections now open)
m30000| Thu Jun 14 01:47:11 [conn19] end connection 127.0.0.1:39146 (12 connections now open)
m30000| Thu Jun 14 01:47:11 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.5, size: 511MB, took 15.169 secs
m30000| Thu Jun 14 01:47:11 [interruptThread] shutdown: closing all files...
m30000| Thu Jun 14 01:47:11 [interruptThread] closeAllFiles() finished
m30000| Thu Jun 14 01:47:11 [interruptThread] shutdown: removing fs lock...
m30000| Thu Jun 14 01:47:11 dbexit: really exiting now
m30001| Thu Jun 14 01:47:12 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Thu Jun 14 01:47:12 [interruptThread] now exiting
m30001| Thu Jun 14 01:47:12 dbexit:
m30001| Thu Jun 14 01:47:12 [interruptThread] shutdown: going to close listening sockets...
m30001| Thu Jun 14 01:47:12 [interruptThread] closing listening socket: 17
m30001| Thu Jun 14 01:47:12 [interruptThread] closing listening socket: 18
m30001| Thu Jun 14 01:47:12 [interruptThread] closing listening socket: 19
m30001| Thu Jun 14 01:47:12 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Thu Jun 14 01:47:12 [interruptThread] shutdown: going to flush diaglog...
m30001| Thu Jun 14 01:47:12 [interruptThread] shutdown: going to close sockets...
m30001| Thu Jun 14 01:47:12 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Thu Jun 14 01:47:12 [interruptThread] shutdown: closing all files...
m30001| Thu Jun 14 01:47:12 [conn10] ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| ClientCursor::find(): cursor not found in map -1 (ok after a drop)
m30001| end connection 127.0.0.1:48989 (3 connections now open)
m30001| Thu Jun 14 01:47:12 [conn8] end connection 127.0.0.1:48986 (3 connections now open)
m30001| Thu Jun 14 01:47:12 [interruptThread] closeAllFiles() finished
m30001| Thu Jun 14 01:47:12 [interruptThread] shutdown: removing fs lock...
m30001| Thu Jun 14 01:47:
                167169.593096ms
Thu Jun 14 01:47:13 got signal 15 (Terminated), will terminate after current cmd ends
Thu Jun 14 01:47:13 [interruptThread] now exiting
Thu Jun 14 01:47:13 dbexit:
Thu Jun 14 01:47:13 [interruptThread] shutdown: going to close listening sockets...
Thu Jun 14 01:47:13 [interruptThread] closing listening socket: 5
Thu Jun 14 01:47:13 [interruptThread] closing listening socket: 6
Thu Jun 14 01:47:13 [interruptThread] closing listening socket: 7
Thu Jun 14 01:47:13 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Thu Jun 14 01:47:13 [interruptThread] shutdown: going to flush diaglog...
Thu Jun 14 01:47:13 [interruptThread] shutdown: going to close sockets...
Thu Jun 14 01:47:13 [interruptThread] shutdown: waiting for fs preallocator...
Thu Jun 14 01:47:13 [interruptThread] shutdown: closing all files...
Thu Jun 14 01:47:13 [interruptThread] closeAllFiles() finished
Thu Jun 14 01:47:13 [interruptThread] shutdown: removing fs lock...
Thu Jun 14 01:47:13 dbexit: really exiting now
test /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mrShardedOutput.js exited with status 253
50 tests succeeded
41 tests didn't get run
The following tests failed (with exit code):
/mnt/slaves/Linux_32bit/mongo/jstests/sharding/mrShardedOutput.js 253
Traceback (most recent call last):
  File "/mnt/slaves/Linux_32bit/mongo/buildscripts/smoke.py", line 782, in <module>
    main()
  File "/mnt/slaves/Linux_32bit/mongo/buildscripts/smoke.py", line 778, in main
    report()
  File "/mnt/slaves/Linux_32bit/mongo/buildscripts/smoke.py", line 490, in report
    raise Exception("Test failures")
Exception: Test failures
scons: *** [smokeSharding] Error 1
scons: building terminated because of errors.